Title: Robust Autonomy Emerges from Self-Play

Paper Decision: Accept (poster)
Review 1:

Summary: The authors introduce a simulator which is capable of efficiently simulating joint traffic scenarios at scale. Using this simulator, they train a population of driving policies using self-play reinforcement learning. When evaluated on CARLA, nuPlan, and Waymax, the authors report state-of-the-art performance despite never training on these datasets.

## Update After Rebuttal

During the rebuttal, the authors addressed my concerns regarding tuning of the reward and committed to including additional references. I believe that this work warrants acceptance. However, I believe that the manner in which the authors report their performance on Waymax is misleading. I strongly urge them to improve the clarity, as discussed in my review, to further strengthen their work.

Claims And Evidence: Although technically correct, I have some concerns regarding the framing of the authors' claims of emergent state-of-the-art behaviour. The authors use three driving benchmarks to support their claim. Of these, nuPlan is a strong result, as it is widely used in this field. CARLA is less commonly used. I have no issues with either of these evaluations. My major concern is with the authors' Waymo-based evaluations. In the literature, the Waymo Open Motion Dataset is an extremely popular and competitive benchmark. The authors compare against the baseline methods reported in the original 2023 Waymax simulator paper [1] and prominently report their improvements (i.e., Figure 1, right). To my knowledge, I have never seen this paper's metrics used as a benchmark for evaluating policy performance. As I understand it, the official and substantially more widely used evaluation for the Waymo Open Motion Dataset is the Waymo Sim Agents Challenge (WOSAC). On this domain, the authors do not achieve state-of-the-art performance, despite only comparing their method to the 2023 leaderboard and not the newer 2024 leaderboard, where the state of the art has further improved.

[1] Gulino et al., "Waymax: An accelerated, data-driven simulator for large-scale autonomous driving research," 2023.

Methods And Evaluation Criteria: In addition to my concerns about the SoTA claim, I have a few concerns about the evaluation criteria:
1. For the Waymax benchmark, the authors propose a Score metric, which is not present in the original Waymax paper, as their summary metric. Although this metric captures realism regarding collisions, off-road driving, and distance driven, it does not incorporate the displacement error with respect to the logged policy. Notably, Gigaflow performs worse than all policies except DQN on this metric.
2. The self-play policy is conditioned on $C_{reward}$, which the authors argue has a significant impact on the behaviour of the agent. However, $C_{reward}$ is not reported for any of their benchmarks. It is unclear how much hyperparameter tuning of $C_{reward}$ was needed to achieve state-of-the-art performance on each dataset.
3. As noted in the Claims And Evidence section, WOSAC should be evaluated on the newer 2024 leaderboard.

Theoretical Claims: N/A

Experimental Designs Or Analyses: See Methods And Evaluation Criteria.

Supplementary Material: I read through the appendix. However, I was not able to find the referenced infraction video supplementary material.

Relation To Broader Scientific Literature: This paper provides a counterpoint to the large body of literature on learned traffic behavioural models. Specifically, the majority of prior art is based on various imitation learning techniques, leveraging large-scale datasets of recorded traffic behaviour. The quality and realism of the policy reported in this paper open up new possibilities for developing traffic policies through RL.

Essential References Not Discussed: The authors fail to include several more recent SoTA policy models, as they do not compare to the 2024 WOSAC leaderboard. For example:
- SMART: Scalable Multi-agent Real-time Motion Generation via Next-token Prediction (Wu et al., 2024)*
- BehaviorGPT: Smart Agent Simulation for Autonomous Driving with Next-Patch Prediction (Zhou et al., 2024)

This paper references two papers on fine-tuning behavioural cloning policies with RL. I believe this subject area is highly related and warrants further discussion. For example:
- Improving Agent Behaviors with RL Fine-tuning for Autonomous Driving (Peng et al., 2024)
- Learning to Drive via Asymmetric Self-Play (Zhang et al., 2024)
- Scaling Is All You Need: Autonomous Driving with JAX-Accelerated Reinforcement Learning (Harmel et al., 2023)

*The SMART 7M results from Table 1 of Wu et al. correspond to the InteractionFormer results from Table A8 of your paper.

Other Strengths And Weaknesses: In my opinion, this paper is very well done in almost every aspect. It shows evidence that large-scale reinforcement learning is a promising alternative to the dominant paradigm of supervised learning for traffic modelling. The analysis of the policy performance is well done, and the details provided in the appendix are thorough. Frankly, had the authors simply reported their reasonable WOSAC performance instead of claiming Waymo SoTA through their "Waymax benchmark", I would wholeheartedly recommend acceptance. Their SoTA claim in this area undermines an otherwise excellent paper.

Strengths:
- The performance reported by this paper is remarkable given the lack of supervised training on human data.
- The qualitative analysis of the resulting policies is robust.
- The paper is clear and well written.
- The supplementary material provides many of the details required to reproduce the paper.

Weaknesses:
- While obviously a strong behavioural policy, the state-of-the-art claims on Waymo data are unconvincing.
- The authors report training-time compute but do not report hyperparameter tuning procedures or compute budget.
- Although the authors spend a large portion of the paper describing the details of their simulator, they do not offer to open-source it, making reproduction of their results challenging.

Other Comments Or Suggestions: No other comments.

Questions For Authors: My primary grievances with this paper concern the reporting of evaluation hyperparameters and results on Waymo data. I will gladly raise my score if my questions are answered in these areas.
1. What reward conditioning signal was used for each of the reported benchmarks? How much tuning (if any) was done to select the reward hyperparameters for each benchmark?
2. How does the performance of SoTA WOSAC policies compare against your policy on the Waymax benchmark? I would specifically like to see the performance of SMART, which is available open source [here](https://github.com/rainmaker22/smart).
3. Has any other prior art used the baseline implementations presented in the Waymax paper as a benchmark? I was unable to find any other work which uses these results as a benchmark.
4. The performance of Gigaflow seems highly dependent on the details of your simulator. Will you open-source this simulator to allow others to reproduce your results?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
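The reviewer's first methodological concern is that a composite Score metric can look strong while omitting displacement error entirely. To make that concrete, here is a minimal sketch of how such a per-episode summary might combine collision, off-road, and progress terms; the function names, the zero-on-infraction rule, and the weighting are illustrative assumptions, not the paper's actual definition:

```python
def episode_score(collided: bool, off_road: bool, distance_m: float,
                  route_length_m: float) -> float:
    """Illustrative composite score: fraction of route completed,
    zeroed on any safety infraction. Note that displacement error
    relative to the logged trajectory never enters the computation."""
    if collided or off_road:
        return 0.0
    return min(distance_m / route_length_m, 1.0)

def benchmark_score(episodes) -> float:
    """Mean per-episode score over a benchmark split.
    Each episode is (collided, off_road, distance_m, route_length_m)."""
    scores = [episode_score(*e) for e in episodes]
    return sum(scores) / len(scores)
```

A policy that drives safely but far from the human log would score perfectly under a metric of this shape, which is exactly the trade-off debated in the review and rebuttal.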
Rebuttal 1:

We thank the reviewer for their thoughtful review, and list a detailed reply to all questions raised. To contextualize this rebuttal, we would like to clarify one of the main differences between agent and environment simulation (WOSAC) and learning a driving policy (this work). The goal of WOSAC (and motion forecasting more generally) is to learn how human traffic generally behaves. If a vehicle collides or goes off-road (in the real world, or in a log due to sensor noise), a good agent and environment simulator will simulate this event. The WOSAC benchmark contains 3% collision events and 12% off-road events. A competent driving policy should never collide, and rarely go off-road. For example, the best agent and environment simulators (SMART etc.) should reproduce the logged trajectory almost perfectly to maximize their score. A good policy will not, because certain noisy trajectories lead to collisions and off-road events. It was never our aim to claim state-of-the-art on WOSAC, nor should a competent driving policy be expected to achieve it. We show that in interactive driving simulations (CARLA, nuPlan, Waymax) our policy is a competent driver which gets to its goals safely, robustly, and efficiently. However, it does not necessarily achieve this in a human-like manner, nor does it imitate the sensor noise of the Waymo perception stack. This leads to a lower WOSAC score. The only mention of WOSAC in the main paper is on line 429. We primarily mention it because it is interesting, but not critical to the claims of the paper, that we achieve a non-trivial WOSAC score.

> Additional references

Thank you. We will add them in the final version.

> For the Waymax benchmark, the authors propose a Score metric which is not present in the original Waymax paper as their summary metric...

It is unfortunately quite easy to trade individual metrics off in the Waymax benchmark. Yes, BC and Wayformer perform better in Log ADE (following the demonstrated trajectory). They clearly should, as this was their training objective. However, they fall short across all other simulation objectives, with a collision rate above 4%.

> As noted in the Claims And Evidence section, WOSAC should be evaluated on the newer 2024 leaderboard.

We are working on this, and hopefully will have numbers for the final version.

> It is unclear how much hyperparameter tuning of C_reward was needed to achieve state of the art performance on each dataset. / How much tuning (if any) was done to select the reward hyperparameters for each benchmark?

All benchmarks use the same reward hyperparameters (the mean of the randomized range). We did not tune reward parameters, as we considered this overfitting to the target benchmark (which would invalidate our zero-shot generalization claim). There is a single difference between the benchmarks, which is the radius of the target region. CARLA requires more precise navigation than Waymax and nuPlan (the agent needs to follow the given route points closely) and uses a tighter target radius. We will add these values to the appendix in the final version.

> How does the performance of SoTA WOSAC policies compare against your policy on the Waymax benchmark? I would specifically like to see the performance of SMART, which is available open source here.

SMART certainly performs better at agent and environment simulation (WOSAC). The aim of the WOSAC results was to show that the policy learns human-like driving, not to claim state-of-the-art on this benchmark. As such, we sharply outperform SMART on collision scores, our primary metric of interest, but underperform on distance to logged trajectories, which our paper did not target and which is not always relevant for a self-driving car.

> Has any other prior art used the baseline implementations presented in the Waymax paper as a benchmark?

The tools developed in the Waymax paper are primarily intended for the development of human-like simulation agents. However, as the data is interesting and can be used to measure robustness, other works have started to use it in manners similar to ours [1, 2, 3].

[1] Cornelisse, D., Pandya, A., Joseph, K., Suárez, J., & Vinitsky, E. (2025). Building reliable sim driving agents by scaling self-play. arXiv preprint arXiv:2502.14706.
[2] Xiao, L., Liu, J. J., Ye, X., Yang, W., & Wang, J. (2024). EasyChauffeur: A Baseline Advancing Simplicity and Efficiency on Waymax. arXiv preprint arXiv:2408.16375.
[3] Charraut, V., Tournaire, T., Doulazmi, W., & Buhet, T. (2025). V-Max: Making RL practical for Autonomous Driving. arXiv preprint arXiv:2503.08388.

> Will you open-source this simulator to allow others to reproduce your results?

We would very much like to open-source this work, but cannot commit due to approval requirements beyond our control.
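The rebuttal above states that evaluation uses the mean of the randomized reward range for every benchmark, while training samples conditions from the full range. That convention can be sketched as follows; the parameter names and numeric ranges are invented placeholders, since the actual values are deferred to the paper's appendix:

```python
import random

# Hypothetical randomization ranges for reward-conditioning parameters;
# the real parameters and bounds are not given in the rebuttal.
REWARD_RANGES = {
    "collision_penalty":     (-2.0, -0.5),
    "comfort_weight":        (0.0, 0.2),
    "speed_limit_tolerance": (0.0, 0.3),
}

def sample_train_condition(rng: random.Random) -> dict:
    """During training, each agent draws its reward condition
    uniformly from the randomized range."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in REWARD_RANGES.items()}

def eval_condition() -> dict:
    """At evaluation, every benchmark uses the same condition:
    the mean of each randomized range (per the rebuttal)."""
    return {k: (lo + hi) / 2.0 for k, (lo, hi) in REWARD_RANGES.items()}
```

The point of the convention is that no per-benchmark tuning occurs: the evaluation condition is fully determined by the training ranges.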
Review 2:

Summary: This paper presents a batched driving simulator called GIGAFLOW that enables large-scale self-play, i.e., randomly initializing scenarios and learning robust driving behaviors with pre-defined RL rewards. GIGAFLOW creates worlds based on eight maps, and spawns agents at random locations with randomly perturbed accelerations, sizes, goals, and other necessary parameters. It defines rewards w.r.t. safety, progress, traffic rules, and comfort. It trains a unified policy for all agents, with conditions for various driving behaviors as input. The policy is trained with PPO and a variant of prioritized experience replay to filter unimportant samples. Through learning on GIGAFLOW only, the policy achieves state-of-the-art performance on multiple driving benchmarks, including CARLA, nuPlan, Waymax, and Waymo Sim Agents.

## Update After Rebuttal

The authors clarified most of my concerns and confusion on technical details during the rebuttal. This paper presents a promising direction of self-play to realize robust motion planning. The novelty, contribution, and final results all demonstrate its high quality. I did not give a higher score mainly because self-play has been adopted in other domains, which limits the broader impact on the ML community. Its reliance on accurate maps and perception results may also limit its real-world application, but I believe the industry can acquire insights from the paper and address the bottlenecks for large-scale deployment soon.

Claims And Evidence: The paper's claims are solid overall. The method does not involve human data and uses self-play with RL only, in alignment with the claims. The experimental results show that it realizes state-of-the-art performance on benchmarks and outperforms specific algorithms.

Methods And Evaluation Criteria: The method, self-play, is simple and sound. To make it effective, the authors propose several improvements, including hash-based map encoding for acceleration, a simple importance sampling strategy, careful randomization and filtering designs, etc. These methods make sense for the problem and are validated in the ablation study in the supplementary material.

Theoretical Claims: Not applicable. This paper does not include theoretical claims.

Experimental Designs Or Analyses: The authors incorporate multiple benchmarks, including open-loop and closed-loop evaluations. I think the experimental results and analysis are valid and sound, with concrete descriptions of how the experimental settings are adapted in the supplementary material.

Supplementary Material: I have read all parts of the supplementary material. The authors provide great detail about the simulator implementation, randomization, models, rewards, results on several benchmarks, etc. The simulator design is important to grasp how they accelerate batched training, and the randomization parts are crucial to assess the technical novelty of this work.

Relation To Broader Scientific Literature: The methodology in this paper is mainly self-play, which has been explored in existing applications such as StarCraft and Go.

Essential References Not Discussed: I think one closely related prior work is GPUDrive [1]. It also supports large-scale simulation and RL training at 1 million FPS, which is comparable to the speed of GIGAFLOW (1.2M as shown in Table A1). Though GPUDrive mainly targets fast training of agents on WOMD, it can probably support self-play as well. Additionally, GPUDrive supports sensor simulation. I suggest the authors include a discussion comparing with GPUDrive in the revision, which may be important for evaluating the contribution of GIGAFLOW in the aspect of batched simulation.

[1] Kazemkhani, S., Pandya, A., Cornelisse, D., Shacklett, B. and Vinitsky, E., 2024. GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS. arXiv preprint arXiv:2408.01584.

Other Strengths And Weaknesses:

Strengths:
1. The map hashing part involves great engineering effort, which is crucial to designing an accelerated simulator for batched RL training.
2. Conditioning various agent types to the policy and using a unified policy for training is interesting.
3. Great visualizations. Very neat and informative.
4. Results are strong, with a single policy adapted to various benchmarks.

Weaknesses:
1. One potential weakness of this method is the reliance on maps. This work puts great effort into encoding maps to accelerate information acquisition during training and calculating geometric constraints. However, the vectorized representation requires accurate high-definition (HD) maps to achieve robust planning, which are not available for most areas. To enlarge the functional regions, the current driving industry is focusing on map-free or map-light methods, using navigation maps or (inaccurate) crowdsourced maps only. Therefore, this issue may limit the broader application for the real-world driving industry.
2. The paper adopts Frenet coordinates, which potentially encourage vehicles to drive in the center of lanes. This is reasonable in most cases and is prevalent in current motion benchmarks. However, corner cases in real life sometimes do not satisfy this assumption. For example, bypassing an obstacle in front may require the vehicle to deviate into the opposite lane. The method also adopts a discrete set of actions (12 in total) to simplify the problem. I understand that the authors have fully analyzed the failure cases in current benchmarks, yet I wonder if it is possible to come up with more challenging scenarios to discuss the drawbacks of the proposed method. Besides, another potential benchmark in academia could be Bench2Drive [2]. This weakness also relates to W1 in terms of maps.
3. I hope the authors could provide further clarification on Figure A3. The results seem to indicate that randomization mainly helps the CARLA experiments, while the impact on nuPlan and Waymax is relatively minor. I think the ablations in the appendix, Sec. F, are more important for gaining insight into the proposed method than the visualizations in Figure 4.
4. The RL algorithm is trained in GIGAFLOW with billions of miles, but a large portion of the data is filtered. Could the authors provide a more detailed number for the data ultimately used and compare it with the data in the various benchmarks? As dataset-specific algorithms assume all of the public dataset is useful, this comparison would be more direct and intuitive for seeing how much valuable data is needed.

[2] Jia, X., Yang, Z., Li, Q., Zhang, Z. and Yan, J., 2024. Bench2Drive: Towards multi-ability benchmarking of closed-loop end-to-end autonomous driving. arXiv preprint arXiv:2406.03877.

Other Comments Or Suggestions:
1. Typos: Line 29/401 (e.g.,), Line 85 (Dashed Lines -> The trophy and the thumbtack?), Line 761 (intersect), Line 979 (GPUs), additional blank lines before Eq. 1 and Eq. 5.
2. There are multiple sentences stating that videos are in the supplementary material in the benchmark evaluations (Sec. E, Lines 1088/1051), but they are in fact not provided.
3. It would be helpful if the work could be open-sourced to benefit the community and the industry.

Questions For Authors:
1. What is the meaning of "long-form evaluation" in Sec. F.3? Besides, what is the relationship of long-form evaluation to the decision-making frequency in Figure A4? I hope the authors can clarify this section in the revision.
2. In multiple places, the authors state that they use only a minimalistic reward function (Line 43) or do not design delicate reward terms (Line 149). Meanwhile, I think the rewards listed in Sec. B.3 and Table A2 already include the most commonly adopted rewards in RL for driving. Could the authors clarify which other rewards could potentially be used for more delicate designs?
3. The description in Lines 258-268 is very interesting. But I am curious how the policy autonomously adapts to various scoring criteria if they are contradictory.
4. Would using a single policy to train all agents, with various conditions and randomization, result in unstable training, especially during the beginning stages?
5. How are the routing information and goals applied for short-term planning benchmarks like nuPlan? Will there be label leakage or additional information used when comparing with other methods?
6. Sec. E.4 adopts the WOSAC 2023 results. What about the public 2024 WOSAC leaderboard?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
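The hash-based map encoding praised above is not specified in detail in this thread. A common realization of fast map lookups is a uniform-grid spatial hash that buckets map elements by cell, so a nearby-lane query touches only a handful of cells instead of the whole map. The sketch below assumes a uniform grid; the cell size, API, and point-based map elements are illustrative, not the paper's actual scheme:

```python
from collections import defaultdict

CELL = 5.0  # grid cell size in meters (illustrative)

def cell_of(x: float, y: float) -> tuple:
    """Map a continuous position to an integer grid cell."""
    return (int(x // CELL), int(y // CELL))

class SpatialHash:
    """Buckets map points by grid cell for near-constant-time
    neighborhood lookups, independent of total map size."""

    def __init__(self):
        self.grid = defaultdict(list)

    def insert(self, x: float, y: float, item):
        self.grid[cell_of(x, y)].append((x, y, item))

    def query(self, x: float, y: float, radius: float):
        """Return all items within `radius` of (x, y),
        scanning only the cells that can contain hits."""
        r_cells = int(radius // CELL) + 1
        cx, cy = cell_of(x, y)
        hits = []
        for dx in range(-r_cells, r_cells + 1):
            for dy in range(-r_cells, r_cells + 1):
                for px, py, item in self.grid.get((cx + dx, cy + dy), []):
                    if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                        hits.append(item)
        return hits
```

Because query cost depends only on the radius and local density, a structure of this shape batches well across many simultaneous agents, which is the property the reviewer credits for the simulator's throughput.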
Rebuttal 1:

We thank the reviewer for their thoughtful review, and list a detailed reply to all questions raised.

> GPUDrive

We are aware of the work, and have been in close communication with the authors of GPUDrive from the start of their project. We will cross-cite each other's work for the camera-ready paper.

> ... requires accurate high-definition (HD) maps to achieve robust planning, which are not available for most areas

True, training requires HD maps. It is unclear how large the reliance on HD maps is during deployment. Our policy generalizes across several different vector map formats (CARLA, nuPlan, Waymo). It does not directly see the vector representation, but rather an intermediate representation. A performant sensing stack should be able to construct this representation.

> The paper adopts Frenet coordinates, which potentially encourage vehicles to drive in the center of lanes

This is a misunderstanding. The Frenet frame provides a policy with a rotation-invariant representation of its local world. However, it does not limit its driving ability. The policy chooses to go into the opposing lane if required (see Figure 4). After all, PPO optimized the policy on such a huge amount of experience that it seems inconceivable that biases like lane-centering would survive only because of the input representation.

> ... Sec. F.3? ... Figure A4?

Our training environment is augmented with erratic drivers and severe dynamics noise. Sec. F.3 evaluates robustness under ordinary conditions. In this regime, our agents achieve over 3 million km of driving between incidents of any kind. This corresponds to approximately 99.99999997% of transitions in the simulator being collision-free. To the best of our knowledge, this is one of the first examples in the literature where such a level of robustness is demonstrated for a stochastic policy trained with RL. The policy drives better with a higher decision-making frequency (Fig. A4), as it has to commit to "shorter" actions (lower reaction time). We will clarify this in the final paper.

> ... Could the authors clarify which other rewards can potentially be used for more delicate designs?

Our reward encodes road rules, but not behavior or expected interactions between agents. We do not use:
- Penalties shaping expected proximity to other agents
- Expected speeds around other agents
- Forward progress

These terms trade short-term performance against generalizability.

> ... Lines 258-268 ... how the policy autonomously adapts to various scoring criteria if they are contradictory.

Our evaluation is zero-shot. We use a single reward condition for all benchmarks; there is no autonomous adaptation.

> Would using a single policy ... result in unstable training, ...?

It does not. Ultimately, the major reward the agent receives is for reaching a goal. The randomized components of the reward are secondary. This is also reflected in the behaviors learned. The agent first learns to move towards the goal and collect close-by goals; only after that will it pay attention to red lights, driving direction, etc. The training curves are stable. We will add them to the appendix.

> Routing information / potential label leakage?

We do not use the expert trajectory in any way (no leakage). For nuPlan, we use a point in the final lane segment as the goal (no routing information; our policy learned to route locally during training). For Waymax, we use the final location of the logged ego vehicle projected forward 20 m along the lane to ensure that the agent continues driving if it arrives early. In neither setup does our agent see more information than the baselines.

> 2024 WOSAC leaderboard?

We are working on it, but unfortunately won't finish in time for this rebuttal.

> Clarifications on Figure A3 and randomization

The metrics for nuPlan and Waymax are very compressed. Both benchmarks have a certain number of scenarios that are unsolvable and thus a non-zero optimal policy. CARLA has fewer scenarios that are completely unsolvable, and its metric degrades less gracefully. Reward randomization does help on all benchmarks (nuPlan 93.08 -> 93.17; Waymax 98.79 -> 98.92).

> Detailed number of data used.

Over 99% of trajectories are used during training and need to be simulated to completion to calculate rewards and advantages for each timestep. Figure 1 is an accurate comparison of sizes. Gigaflow is about five orders of magnitude larger than prior public attempts. We filter 80% of timesteps (see Section 2.2). This filtering happens after simulating rollouts. Even accounting for this filtering, RL training sees at least four orders of magnitude more data than a single pass over any publicly available data would provide.

> Typos

Thank you. These will be fixed.

> Supplementary videos

Sorry, we forgot to upload them: https://drive.google.com/drive/folders/1hlDrPDQIu56-iq039fwOa18g8aUGSKYl?usp=sharing

> Open-sourcing

We would very much like to open-source this work, but cannot commit due to approval requirements beyond our control.

---

Rebuttal Comment 1.1:

Comment: Thanks for the clarification.

> ... HD maps

Agreed. This effect is currently unclear for the method, and thus the limitation holds and is worth future work, together with the end-to-end simulation mentioned by Reviewer SjGz.

> ... Figure A4

I think the authors could add a one-sentence summary and discussion like the one in the rebuttal to the paper. Currently the figure is not described with a conclusion.

> ... how the policy autonomously adapts to various scoring criteria if they are contradictory.

Yes, I know that the policy is evaluated zero-shot. I do not mean that the policy is adapted or fine-tuned. I am just curious why the zero-shot policy can achieve robust performance on both benchmarks if the two benchmarks have contradictory scoring criteria. I wonder if the authors can provide a deeper analysis on this point. I think this may relate to the discussion of Reviewer EHkB as well. If the same scenarios are evaluated with two contradictory metrics, the policy should not yield satisfactory performance under both criteria. I assume that, though different benchmarks favor or highlight different metrics, the overall criteria are collisions and running time, which are not contradictory. In this sense, the word "contradictory" may not be very accurate.

---

Reply to Comment 1.1.1:

Comment:

> ... Figure A4

Happy to add this comment.

> ... how the policy autonomously adapts to various scoring criteria if they are contradictory.

The main factors to trade off between the benchmarks are:
- Comfort (nuPlan, Waymax)
- Traffic violations (CARLA)
- Forward progress (CARLA, nuPlan)

Many scoring criteria are shared between all benchmarks:
- Collisions
- Off-road events
- Goal reaching

We have trained some policies with a subset of rewards that perform better on specific benchmarks, but fail to generalize to all. For nuPlan, a policy without stop-lines generally performs better. Red-light violations are not properly penalized in nuPlan, and running lights (without colliding) will yield much better route progress. For CARLA, removing the comfort penalty yields better results. NPC vehicles in CARLA can exhibit braking beyond 2g; a policy that can react to this without regard to a driver's comfort generally avoids a few more collisions. Our strategy has been to include all rewards that pertain to good driving during training and not to exploit each individual simulator. Fortunately, this was sufficient to outperform state-of-the-art methods with a single policy. We are happy to add this discussion to the final version of the paper.
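The rebuttal above states that 80% of timesteps are filtered after rollouts are simulated (Section 2.2 of the paper), via a variant of prioritized experience replay. One plausible form of such post-rollout filtering is to keep only the transitions with the largest-magnitude advantages, i.e., the ones that carry the most learning signal. This is a guess at the mechanism for illustration, not the paper's actual implementation:

```python
def filter_transitions(transitions, advantages, keep_fraction=0.2):
    """Keep the `keep_fraction` of transitions with the largest
    |advantage|. Rollouts are still simulated in full beforehand,
    since advantages require complete trajectories; only the
    gradient updates skip the discarded timesteps."""
    k = max(1, int(len(transitions) * keep_fraction))
    ranked = sorted(zip(transitions, advantages),
                    key=lambda ta: abs(ta[1]), reverse=True)
    return [t for t, _ in ranked[:k]]
```

With `keep_fraction=0.2`, this reproduces the rebuttal's accounting: 100% of trajectories are simulated, but only 20% of timesteps contribute to training.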
Summary: This paper presents GIGAFLOW, a batched simulator that supports large-scale simulation to train robust driving policies via self-play. Through learning from a massive scale of self-play, the learned policy demonstrates superior performance compared to the prior state-of-the-art on multiple challenging benchmarks, without seeing any human demonstration in training. Claims And Evidence: Yes, the claims made in this submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Strengths of methods and evaluation: - **Large-scale reinforcement learning for driving**. The key component of the proposed method is to apply large-scale reinforcement learning for learning robust driving policy. Although reinforcement learning has been adopted for driving in prior works, it is the first time attempt to conduct reinforcement learning on such a large scale with over 42 years of experience. - **Strong generalization of learned policy on multiple benchmarks**. Due to the learning on massive experience with a wide span of scenarios, the learned policy demonstrates strong generalization to unseen environments in a zero-shot manner. Additionally, it sets new records for several challenging benchmarks like CARLA and nuplan without special adaptations or additional learning. These results provide encouraging signals for future studies on applying reinforcement learning to broader areas including driving and robotics. Issue of the method: - **Lack of study on the model design**. This work mainly focuses on the simulation and experiment aspects with a lack of detailed investigation for model designs. For instance, according to L145-150 (right), this method uses a single policy for different types of agents with conditioning $C_{conditioning}$ as an indicator for the agent type. However, the intuition and motivation to use one policy for all agents is under-discussed in this work. 
Furthermore, the ablation study of comparing shared policy and separate policies for different types of agents is not reported in this work. Given the distinction between different agents, such investigations could further improve the soundness of this work. Theoretical Claims: No issues with the theoretical aspect of this work. Experimental Designs Or Analyses: Strengths of experiments: - **Extensive experimental results on multiple benchmarks**. This work presents extensive experimental results to demonstrate the superiority of the proposed method, such as Fig1 and Table (A5, A6, A7) in supp. Providing cross-environment results would empirically require a lot of effort, and thus should be encouraged and valued. - **Abundant qualitative analyses**. Qualitative visualizations in Figures 3 and 4 enhance the readability of this work and clearly illustrate the policy behavior during and after training. Supplementary Material: Yes, I viewed all supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper relate to the areas of autonomous driving, traffic simulation, large-scale training, and applicable reinforcement learning. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Other Strengths: - This paper is well-written, well-organized, and easy to follow. - This paper focuses on a long-standing yet challenging problem, which is to establish a robust driving agent without extensive human demonstrations. This paper shows encouraging results in that research direction. Limitation yet NOT weaknesses of this work: - Although this work shows promising results of learning robust policy via self-play, it is notable that all policies presented in this paper operate on highly abstracted vector space without sensor inputs, which is different from real-world driving agents that reason over their online sensor data. 
Therefore, there is still a significant gap in applying this method to establish a real-world driving policy. The gap lies in the difficulty of high-dimensional sensor simulation, which requires far more computation than the low-dimensional vector-space simulation used in this paper. However, we should be aware that sensor simulation constitutes a separate branch of work and is not the focus of this paper; thus it should not be viewed as a weakness of this work. Also see strengths and weaknesses in the above sections. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review, and list a detailed reply to all questions raised. > The intuition and motivation to use one policy for all agents is under-discussed in this work. Furthermore, the ablation study of comparing shared policy and separate policies for different types of agents is not reported in this work. N separate policies are technically hard to implement efficiently at scale for two reasons: 1. Computational cost grows, as a significant amount of routing is required from agent states to potentially non-uniformly batched inference and training steps/sequences. 2. Separate policies likely train slower: each separate policy sees 1/N of the experience. Prior evidence suggests that embedding a population in a single policy can accelerate learning [1]. The number of policies required to cover our continuous reward parametrization could potentially be quite large. [1] Liu, Siqi, et al. "Neural population learning beyond symmetric zero-sum games." arXiv preprint arXiv:2401.05133 (2024). > Although this work shows promising results of learning robust policy via self-play, it is notable that all policies presented in this paper operate on highly abstracted vector space without sensor inputs, which is different from real-world driving agents that reason over their online sensor data. The reviewer is right that scaling this work to sensor simulation or real-world deployment will require additional work, and presents an exciting avenue for future work. One thing to note is that we are the first work to show that a single policy can generalize well across several different vector maps (CARLA, nuPlan, Waymo). The main reason for this is that our policy does not directly see the vector representation, but rather lane boundaries etc. These inputs can likely be inferred by a sensing stack, although obtaining a good noise profile will be part of future work.
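The shared-policy design discussed in this rebuttal can be illustrated with a minimal sketch (all dimensions, names, and the tiny MLP below are hypothetical, not the paper's architecture): a single network consumes the observation together with an agent-type one-hot and continuous reward parameters, so one set of weights embeds a whole population of behaviors.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(ids, n):
    out = np.zeros((len(ids), n))
    out[np.arange(len(ids)), ids] = 1.0
    return out

class ConditionedPolicy:
    """One shared policy: obs + agent-type one-hot + reward params -> action logits."""
    def __init__(self, obs_dim, num_agent_types, reward_param_dim, num_actions, hidden=64):
        in_dim = obs_dim + num_agent_types + reward_param_dim
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, num_actions))
        self.num_agent_types = num_agent_types

    def logits(self, obs, agent_type_ids, reward_params):
        # Conditioning vector is simply concatenated onto the observation.
        x = np.concatenate(
            [obs, one_hot(agent_type_ids, self.num_agent_types), reward_params],
            axis=1,
        )
        h = np.maximum(x @ self.W1, 0.0)  # ReLU hidden layer
        return h @ self.W2

policy = ConditionedPolicy(obs_dim=32, num_agent_types=3,
                           reward_param_dim=4, num_actions=10)
batch_logits = policy.logits(rng.normal(size=(8, 32)),
                             np.array([0, 1, 2, 0, 1, 2, 0, 1]),
                             rng.random((8, 4)))
print(batch_logits.shape)  # (8, 10)
```

Routing every agent through one batched forward pass like this keeps inference uniform; N separate policies would instead require gathering agents by type before each step.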
--- Rebuttal Comment 1.1: Comment: Thanks for clarifying the reason for using a unified policy for all types of agents. I'd like to keep my score as strong accept regarding the large-scale RL training, strong performance, and extensive experiments on multiple benchmarks of the proposed method.
Rethinking the Stability-Plasticity Trade-off in Continual Learning from an Architectural Perspective
Accept (poster)
Summary: This paper tackles the problem of offline Class Incremental Learning but leverages a dual-architecture strategy. The goal is to design one architecture that would be more plastic (focused on new knowledge) and another that would be more stable (focused on older knowledge) and combine both capabilities during training through distillation. Notably, the authors show via experiments that deeper models are more plastic and wider models are more stable. The authors apply this strategy with various state-of-the-art methods and demonstrate superior performance on two datasets. ## update after rebuttal The identified limitation of computation is indeed important. Fortunately, the authors provided interesting additional experiments demonstrating that the method is still valuable, but is more of a tradeoff than previously presented. Interestingly, the achieved tradeoff is better than that of previous methods leveraging multiple models, and similarly, leveraging the same computation gives overall better performance. In any case, such an approach is valuable for inference. Given that the authors include the experiments provided in the rebuttal and refine the presentation of the method, making the tradeoff between training computation, inference optimization, and memory usage more apparent, I recommend accepting the paper. I therefore raised my score to 4. Claims And Evidence: - Given the standard deviations in Table 1, the conclusions in 3.2 regarding using a deeper network are not very convincing - I deeply appreciate that the code is shared - Other claims are convincing Methods And Evaluation Criteria: - As shown in a previous study [1], the forgetting metric can advantage more plastic models, so it would also be interesting to consider the relative forgetting metric [1] [1] Wang, Maorong, et al. "Improving plasticity in online continual learning via collaborative learning."
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Theoretical Claims: This paper does not contain any theoretical claim. Experimental Designs Or Analyses: See above. Supplementary Material: I checked the supplementary, but not carefully. Relation To Broader Scientific Literature: Key contributions are: - helping continual learning practitioners design more efficient models in continual learning - giving insight into the architectural design of continual learners Essential References Not Discussed: Additional methods employing several learners already exist and should be discussed in the current paper. Especially in the case of [2, 3], the authors similarly have a double architecture that focuses on stability and plasticity, so I believe such work should be mentioned. [2] Michel, Nicolas, et al. "Rethinking Momentum Knowledge Distillation in Online Continual Learning." Forty-first International Conference on Machine Learning. [3] Lee, Hojoon, et al. "Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks." International Conference on Machine Learning. PMLR, 2024. Other Strengths And Weaknesses: - The proposed approach leverages fewer parameters, but its cost in terms of computation is unclear since the two models cannot be trained in parallel. Similarly, the distillation loss term increases computation. I believe sharing computation time is necessary, in addition to the FLOPs given in the appendix. - I really like the idea of the paper; however, I have a hard time understanding how the overall training procedure occurs, since both models are optimized. A pseudo-code would really make this procedure clearer in my opinion. In that sense, I checked the code given by the authors, and I am now even more confused. In the shared code, if we check for example the file `models/icarl_t.py`, in the `_init_train()` and `_update_representation()` functions I can see **two optimization loops of 100 epochs, one for each model**.
One at line 215 and one at line 248, for `_update_representation()`. This is very important and not mentioned in the paper, from my understanding. Similarly, it impacts the training time. This is for me the main weakness of the paper. Other Comments Or Suggestions: See above. Questions For Authors: - How about a model ensemble for inference? Have you tried it? I understand that it would increase computation, but this might also lead to superior performance. - Why does swapping the stable and plastic models lead to a strong drop in performance, as observed in the last row of Table 3? - In section 5.4 the authors claim a reduction in parameter count of 87\% and 81\%, however in the table the reduction is about 50\% for the considered methods. Could you elaborate on this? - What does this "Dynamic Architecture" part mean in Figure 2? Is it because the classification head grows during training? I do not find it very clear at the moment. - Could you give more details on how the hyper-parameters are chosen? Did you conduct any search? Do you use the exact same HP for the baseline and baseline+DualArch? Code Of Conduct: Affirmed. Overall Recommendation: 4
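For context on the distillation term discussed in this review, a generic temperature-scaled knowledge-distillation loss can be sketched as follows (a hedged illustration of the standard KL-based formulation, not necessarily the paper's exact Eq. 1):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax, numerically stabilized.
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    rescaled by T^2 as is conventional for distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean() * T * T)

# Identical logits give zero loss; mismatched logits give a positive loss.
loss_same = distill_loss([[2.0, 0.5, -1.0]], [[2.0, 0.5, -1.0]])
loss_diff = distill_loss([[0.1, 0.2, 0.3]], [[2.0, 0.5, -1.0]])
print(loss_same, loss_diff)
```

In a two-model setup like the one the review describes, this term would be added to the main model's classification loss, with the second (frozen) model supplying `teacher_logits`.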
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 2FFZ for the recognition of our insightful empirical findings and interesting idea. We are also grateful for the valuable and constructive feedback. > Conclusions regarding deeper network Our main conclusion is that "existing architectural designs typically exhibit good plasticity but poor stability, while their wider and shallower variants exhibit the opposite traits" (contribution 1). We acknowledge that further increasing depth does not significantly boost plasticity, so our plastic network's depth aligns with ResNet-18 (Sec. 4.3). We'll revise Sec. 3.2 to clarify this and avoid ambiguity. > Relative forgetting While Average Forgetting is widely used, we agree that *relative forgetting (RF)* [1] offers a fairer stability measure. We've added RF to our analysis. Results on Tab. 1 (partial) support our original claims. | Depth-Width | Penu. Layer | RF↓ (%) | | ----------- | ----------- | ------------- | | 18-64 | GAP | 41.09 | | 10-96 | GAP | 39.15 (-1.94) | | 18-64 | 4x4 AvgPool | 39.97 (-1.12) | > Essential References We thank the reviewer for highlighting these relevant works. We've updated Sec. 2.3 to include a discussion of [2] and [3]. Specifically (*italicized* indicates new content): "..., certain works utilize two learners (known as slow and fast learners) with different functions to achieve CL. *Among them, MKD [2] and Hare & Tortoise [3] employ techniques such as exponential moving averages to integrate knowledge across two models, effectively balancing stability and plasticity.* Our proposed solution ..." > Computation time *Training*: Despite reduced FLOPs (A.1), training time may increase due to non-parallelism. We note that this trade-off is common in multi-model methods (e.g., [1]) and is justified by the performance gains. As shown below (CIFAR100/10, in minutes), our method requires 1.39× to 1.77× the training time. We'll include this analysis in this revision.
| Method | iCaRL | WA | DER | Foster | MEMO | | --------- | ----- | ---- | ---- | ------ | ---- | | Baseline (ResNet18) | 40 | 39 | 74 | 93 | 49 | | w/ Ours | 70 | 69 | 106 | 129 | 85 | *Inference*: Using only the lightweight main model during inference, our method reduces computational cost, running 76% faster. | Models | Frame per second ↑ | | -------- | ------------------ | | Baseline | 5452 | | w/ Ours | 9604 (+76%) | > Training procedure As suggested, we'll include pseudo-code to clarify the training and testing procedure. The brief process is as follows: 1. Initialize the stable main learner $\theta_0$ and the plastic auxiliary learner $\phi_0$. 2. For each new task t (1 to N): - **Train plastic learner**: Update $\phi_{t-1}$ for E epochs using the normal classification loss to get $\phi_{t}$, then freeze and save it. - **Train stable learner**: Update $\theta_{t-1}$ for E epochs with the plastic learner's assistance and the CL method (loss Eq. 1) to get $\theta_t$. - **Test stable learner**: Evaluate $\theta_t$ on all previous tasks (1 to t). We clarify that our method is **fully transparent**, though some details may need elaboration: 1. Sequential training of the two models is mentioned in Sec. 4.4 (lines 257-258); 2. Fairness is ensured by comparing the total FLOPs of both models against single-model baselines in A.1. > Model ensemble We tried ensemble techniques (e.g., exponential moving average) but found them challenging to implement across differing architectures. We believe they hold promise for integrating knowledge and will explore them in future work. > Performance of swapping sta and pla networks The stable and plastic learners have distinct roles (main and auxiliary) in learning, so swapping their architectures degrades performance. This shows the need for dedicated architectures suited to each learner's function. > Parameters count in Sec. 5.4 "87%" and "81%" refer to Fig. 3, where our method remains effective under stricter parameter constraints.
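The sequential procedure listed under "Training procedure" above can be condensed into a control-flow skeleton (toy stand-ins for the real training loops; the set-valued "weights" only record which tasks each learner has seen):

```python
# Toy control-flow skeleton of a two-learner continual-learning loop.
# Hypothetical helper names; real training would update network weights.

def train_plastic(phi, task, epochs):
    # Stand-in for E epochs of plain classification training.
    return phi | {task}

def train_stable(theta, teacher, task, epochs):
    # Stand-in for E epochs of CL loss + distillation from the frozen teacher.
    assert task in teacher, "plastic learner must see the task first"
    return theta | {task}

def dual_arch(tasks, epochs=100):
    theta, phi = set(), set()  # stable (main) / plastic (auxiliary) learner
    for task in tasks:
        phi = train_plastic(phi, task, epochs)        # stage 1: plastic alone
        frozen_teacher = set(phi)                     # freeze a snapshot
        theta = train_stable(theta, frozen_teacher,   # stage 2: distill + CL
                             task, epochs)
    return theta, phi  # only theta is used at test time

theta, phi = dual_arch(["task1", "task2", "task3"])
print(sorted(theta))  # ['task1', 'task2', 'task3']
```

The skeleton makes the sequential dependency explicit: the stable learner for task t cannot start until the plastic learner has finished task t, which is why the two stages cannot run in parallel.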
> Dynamic Architecture in Fig. 2 "Dynamic Architecture" refers to integrating our method with dynamic architecture techniques by expanding the stable learner’s network only. This has been clarified in the revision. > Hyper-parameters (HP) For fairness and reproducibility, we use the same HP for both baseline and baseline+DualArch without specific tuning. HPs for CL methods follow the PyCIL library [4], with epochs and the optimizer scheduler adjusted for consistency across all methods, as noted in Sec. 5.1. **References:** [1] Improving plasticity in online continual learning via collaborative learning. CVPR 2024. [2] Rethinking Momentum Knowledge Distillation in Online Continual Learning. ICML 2024. [3] Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks. ICML 2024. [4] Pycil: a python toolbox for class-incremental learning. 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed feedback and transparency. > We clarify that our method is fully transparent, though some details may need elaboration: > 1. Sequential training of two models is mentioned in Sec. 4.4 (lines 257-258); > 2. Fairness is ensured by comparing total FLOPs of both models against single-model baselines in A.1. I did miss 1., thank you for the clarification. I agree that 2. is present, but as mentioned by myself and other reviewers, computation is more relevant here and is indeed largely impacted by the proposed approach, at least for training. One advantage is the reduced computation for testing, I suppose. Another is the reduced memory footprint. I appreciate the honesty to fully include training time. Now I wonder, how does the training time of DualArch compare with ArchCraft? Is DualArch more computation-heavy while leveraging fewer parameters? Additionally, what happens if the number of **overall epochs** is the same? So instead of having 100 epochs per sub-model (total 200) you have 50 per sub-model (total 100).
How does this compare to training a single model for 100 epochs? The introduced approach is eventually a tradeoff between memory footprint, training time, and inference speed, which remains to be clearly stated in the current version of the paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer 2FFZ for the valuable feedback on our response. > How does training time of DualArch compare with ArchCraft? DualArch exhibits lower training time (minutes) compared to ArchCraft. This demonstrates that DualArch is lightweight in both computation and parameter count when compared with ArchCraft. We will include these results in the revised manuscript. | Method | iCaRL | WA | DER | Foster | MEMO | | --------- | ------ | ------ | ------- | ------- | ------ | | ArchCraft | 85 | 86 | 184 | 186 | 107 | | w/ Ours | **70** | **69** | **106** | **129** | **85** | > What happens if the number of **overall epochs** is the same? As suggested, we conducted an ablation study where we **halved the training epochs** of DualArch to match the total epochs of the baselines (on CIFAR100/10). The results show that DualArch with reduced epochs achieves comparable performance, while reducing training time for most methods. We appreciate this insightful comment and will include these findings in the revision.
**Results of LA % ↑:** | Method | iCaRL | WA | DER | Foster | MEMO | | ------------------------- | --------- | --------- | --------- | --------- | --------- | | Baseline (ResNet18) | 54.87 | **56.98** | 61.83 | 58.67 | 58.46 | | w/ Ours + **Half epochs** | **57.00** | 55.65 | **63.83** | **61.84** | **64.17** | **Results of AIA % ↑:** | Method | iCaRL | WA | DER | Foster | MEMO | | ------------------------- | --------- | --------- | --------- | --------- | --------- | | Baseline (ResNet18) | 68.30 | **69.16** | **72.48** | 69.95 | 70.71 | | w/ Ours + **Half epochs** | **68.31** | 65.56 | 71.42 | **71.60** | **71.09** | **Training time (minutes) ↓:** | Method | iCaRL | WA | DER | Foster | MEMO | | ------------------------- | ------ | ------ | ------ | ------ | ------ | | Baseline (ResNet18) | 40 | 39 | 74 | 93 | **49** | | w/ Ours + **Half epochs** | **36** | **35** | **55** | **63** | 53 | > The introduced approach is eventually a tradeoff between memory footprint, training time, and inference speed, which remains to be clearly stated in the current version of the paper. As suggested, we will incorporate a detailed discussion of the tradeoffs between memory footprint, training time, and inference speed in this revision. This will include an analysis of both the advantages and limitations of our approach, highlighting the key gains as well as potential challenges. We thank Reviewer 2FFZ for the constructive comments again, which have significantly enhanced the clarity and overall quality of our paper.
Summary: This paper studies the stability-plasticity trade-off in continual learning from an architectural perspective. It finds that increasing depth improves plasticity, while increasing width enhances stability. Motivated by this, it proposes a dual-architecture framework, DualArch, comprising two distinct networks dedicated to plasticity and stability, respectively. The proposed framework can be incorporated into existing continual learning methods to enhance performance while remaining significantly parameter efficient. ## update after rebuttal After the rebuttal, my primary concerns regarding scalability and computational efficiency remain unaddressed. The experimental validation continues to rely on limited-scale settings, with the ImageNet-1K results offering minimal insight due to significant downsampling. Additionally, the reported increase in training time relative to baselines, even on smaller datasets like CIFAR-10/100, raises further concerns about the method’s practicality in real-world, large-scale scenarios. Therefore, my assessment of the paper remains unchanged. Claims And Evidence: The paper provides evidence supporting the efficacy of DualArch in small-scale continual learning settings. It demonstrates that DualArch enhances both plasticity and stability, thereby improving overall performance. However, I have the following concerns: 1. **Compute Overhead**: While DualArch claims to be parameter efficient, it is unclear how it enhances computational efficiency given it involves training two models in two distinct training stages. This setup seemingly incurs approximately 2x compute overhead compared to training a single network. A direct comparison in terms of compute cost (e.g., number of updates or FLOPs) would provide more insight. 2. **Scalability**: The proposed framework is only evaluated on datasets with a maximum of 100 classes. It remains unclear how DualArch would perform on larger datasets like ImageNet-1K or longer task sequences. 
Methods And Evaluation Criteria: The forgetting metric (as presented in Figure 1a and tables) appears problematic since it is inherently model-dependent. A robust forgetting evaluation typically compares the model's performance against a universal upper bound (a jointly trained model on the entire dataset/sequence). Without this upper bound, the forgetting metric lacks interpretability. For example: - **Model 1**: Learns less, forgets less → Final accuracy: $x$ - **Model 2**: Learns more, forgets more → Final accuracy: $y$ - **Joint model**: Upper bound accuracy: $z$ In some cases, $z - x > z - y$, meaning that Model 2 actually learns and retains more knowledge despite higher forgetting. Hence, the current forgetting metric can be misleading. Additionally, the paper does not report computational efficiency (e.g., FLOPs, training time) compared to a single-model baseline or training-from-scratch models (joint models). This is critical for real-world continual learning applications where a continual learner should be more efficient than the jointly trained model. Theoretical Claims: Not applicable Experimental Designs Or Analyses: The experiments appear reasonable but do not study *scalability* to larger datasets or tasks. Given the proposed method's reliance on training two models, computational overhead could significantly increase in large-scale tasks. Supplementary Material: I reviewed most of the supplementary material, which supports the main findings. Relation To Broader Scientific Literature: While prior studies separately identify that wider network favors stability or deeper one enhances plasticity, there exists a gap to unify these findings. This paper unifies stability and plasticity from an architectural perspective, which may be informative for the research community. Essential References Not Discussed: The paper seems to discuss relevant prior work adequately. 
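The Model 1 / Model 2 / joint-model argument above can be made concrete with invented toy accuracies:

```python
# Invented toy numbers (for illustration only): per-task accuracy at its
# peak vs. at the end of the task sequence, for the two hypothetical models.

def avg_forgetting(peak, final):
    return sum(p - f for p, f in zip(peak, final)) / len(peak)

peak1, final1 = [60, 60, 60], [58, 59, 60]  # Model 1: learns less, forgets less
peak2, final2 = [80, 80, 80], [70, 72, 80]  # Model 2: learns more, forgets more
joint = 78                                  # z: jointly trained upper bound

f1 = avg_forgetting(peak1, final1)          # low raw forgetting
f2 = avg_forgetting(peak2, final2)          # high raw forgetting
gap1 = joint - sum(final1) / 3              # z - x: distance to upper bound
gap2 = joint - sum(final2) / 3              # z - y

# Model 2 "forgets" more (f2 > f1) yet ends up far closer to the joint
# upper bound (gap2 < gap1), which is the reviewer's point.
print(f1, f2, gap1, gap2)
```

The raw forgetting metric ranks Model 1 as more stable, while the upper-bound gap shows Model 2 retains more knowledge overall.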
Other Strengths And Weaknesses: Striking a better stability-plasticity trade-off from an architectural perspective seems intriguing. The idea presented in the paper seems interesting, but performing more rigorous experiments will make the paper stronger. In particular, compute efficiency and ability to scale are important criteria to align continual learning with real-world applications. Other Comments Or Suggestions: Please see my comments above regarding weaknesses. Questions For Authors: 1. How does DualArch perform on datasets with more than 100 classes, such as ImageNet-1K or subsets? 2. Are the plastic and stable networks initialized with random or pre-trained weights? 3. Can you provide a comparison of the compute cost (in FLOPs or training time) compared to training a single model? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank Reviewer Kz16 for the recognition of our intriguing research perspective and interesting idea. We are also grateful for the valuable and constructive feedback. > Computation overhead. We would like to clarify that the total FLOPs of the two models (Sta-Net and Pla-Net) in Dual-Arch are lower than the baseline, as detailed in Appendix A.1 (lines 614-617). For instance, on CIFAR100, Sta-Net and Pla-Net require 255M and 241M FLOPs, respectively, totaling 496M, less than the 558M FLOPs of ResNet-18. We thank the reviewer for the valuable suggestion and will emphasize this point in the revised main text for clarity. *Training time*: While the total FLOPs are reduced as shown in A.1, we acknowledge that the actual training time may increase due to non-parallel training. We note that this trade-off is common in multi-model methods (e.g., [1]) and is justified by the performance gains. As shown in the CIFAR100/10 experiments below, our method requires 1.39× to 1.77× the training time. We will include this analysis in this revision. | Method | iCaRL | WA | DER | Foster | MEMO | | ------------------- | ------ | ------ | ------- | ------- | ------ | | Baseline (ResNet18) | 40 min | 39 min | 74 min | 93 min | 49 min | | w/ Ours | 70 min | 69 min | 106 min | 129 min | 85 min | *Inference time*: Since our approach utilizes only the main model (a stable learner) during inference, which is more lightweight than the baseline ResNet-18, the computational cost during inference is significantly reduced. Below are the results on ImageNet, demonstrating that our method achieves a 76% increase in inference speed. | Models | Frame per second ↑ | | ------------------- | ------------------ | | Baseline (ResNet18) | 5452 | | w/ Ours | 9604 (+76%) | > Long task sequences and large datasets. *Large datasets:* CIFAR100 and ImageNet100 are common benchmarks in continual learning (CL) research [1-3].
To further address the reviewer's concern, we extend our evaluation to large-scale ImageNet-1K/10 tasks. Following [4], we use the 32×32 downsampled ImageNet-1K for efficiency, with a replay buffer size of 20,000 as in [5]. Results show our method remains effective, achieving gains of +3.37% and +5.11%, indicating its strong scalability. | Methods | AIA /% | | ------- | ----------------- | | iCaRL | 24.25 | | w/ Ours | **27.62 (+3.37)** | | WA | 22.42 | | w/ Ours | **27.53 (+5.11)** | *Long task sequences:* Tab. 7 (Appendix A.5) demonstrates our method’s superior performance on CIFAR100/50, a benchmark with long task sequences (50 tasks). The results below highlight consistent improvements of our method over baselines. | Method | LA (%) | AIA (%) | | ------- | --------- | --------- | | iCaRL | 45.30 | 63.99 | | w/ Ours | **48.95** | **65.93** | | WA | 42.12 | 58.26 | | w/ Ours | **47.13** | **64.41** | > Forgetting metrics. We would like to clarify that (final) average forgetting (FAF) is a well-established metric in continual learning for measuring stability [4, 5]. Regarding the reviewer's suggestion to compare the model's performance against a universal upper bound, we respectfully note that such an evaluation may represent overall performance rather than forgetting. Since the upper bound is shared across methods, the resulting metric would primarily reflect final/last accuracy (LA), which inherently combines both plasticity (AAN) and forgetting (FAF), as approximated by $LA \gtrapprox AAN - FAF$. > Initialization of networks. All models are randomly initialized, consistent with the baseline methods. We will add pseudo-code in this revision to clarify this point. Here is a brief process description: 1. Randomly initialize the stable learner (main model, $\theta_0$) and the plastic learner (auxiliary model, $\phi_0$). 2.
For each new task t (1 to N): - **Train plastic learner**: Update $\phi_{t-1}$ for E epochs using the normal classification loss to get $\phi_{t}$, then freeze and save it. - **Train stable learner**: Update $\theta_{t-1}$ for E epochs with the plastic learner's assistance and the CL method (loss Eq. 1) to get $\theta_t$. - **Test stable learner**: Evaluate $\theta_t$ on all previous tasks (1 to t). **References:** [1] Rethinking Momentum Knowledge Distillation in Online Continual Learning. ICML 2024. [2] Resurrecting old classes with new data for exemplar-free continual learning. CVPR 2024. [3] Incorporating neuro-inspired adaptability for continual learning in artificial intelligence. Nature Machine Intelligence 2023. [4] Loss of plasticity in deep continual learning. *Nature* 2024. [5] Pycil: a python toolbox for class-incremental learning. 2023. [6] A comprehensive survey of continual learning: Theory, method and application. IEEE TPAMI 2024. [7] Class-incremental learning: A survey. IEEE TPAMI 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will maintain my original score. **Additional Comments** I'm sorry for not clarifying my concerns. Previously, I posted my additional comments using "official comment", which was not visible to the authors. I have just realized that. Hence, I am adding my additional comments here. My primary concerns revolve around scalability and efficiency, which are critical for real-world applications. While toy settings can be useful for demonstrating proof of concept, they provide limited insight into whether a method will be effective in practical scenarios involving a large number of data points, a higher number of classes, high-resolution images, and longer task sequences. Unfortunately, these concerns remain unaddressed in the rebuttal. The ImageNet setting presented is not ideal, as it fails to incorporate the aforementioned criteria.
Specifically, downsampling images to $32 \times 32$ significantly limits the evaluation of scalability on high-resolution datasets (e.g., $224 \times 224$ images for ImageNet). Furthermore, compute and memory overhead are crucial factors in large-scale settings, and datasets like CIFAR-100 are insufficient for evaluating such constraints. In CIFAR-10/100 experiments, the proposed method requires $1.39\times$ to $1.77\times$ the training time compared to the baselines. This raises further concerns about its computational efficiency in large-scale scenarios. Ultimately, it is vital to carefully assess whether findings from toy settings will generalize, as many methods that perform well on simplified benchmarks often fail to scale to more complex, real-world datasets. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response. We sincerely appreciate the reviewer's time and effort in reviewing our work and have incorporated all suggestions into this revision, including: - Clarifying computational overhead comparisons (FLOPs, training/inference time) to address the concern about efficiency. - Extending evaluations to ImageNet-1K as suggested for large-scale validation. - Explicitly stating network initialization details. We believe these revisions have strengthened the manuscript and fully address the reviewer's concerns. We remain open to further suggestions to improve the work.
Summary: The paper investigates the stability-plasticity trade-off in continual learning from an architectural perspective. Through empirical studies, the authors find that depth enhances plasticity while width favors stability. Building on this insight, they propose an approach that leverages two specialized networks with complementary strengths (stability and plasticity) to address this trade-off. Extensive experiments across multiple datasets and continual learning methods demonstrate that the proposed method not only improves performance but also enhances parameter efficiency compared to baselines. ## update after rebuttal After the rebuttal, all of my concerns have been adequately addressed. I have also read the other reviewers' comments. I think the experimental validation is already sufficient, based on my knowledge of continual learning. Regarding training time, I also agree with Reviewer 2FFZ's updated assessment that it is acceptable. For these reasons, I tend to accept this submission. Claims And Evidence: The authors conduct thorough empirical studies comparing different architectural configurations (depth vs. width) in Section 3, providing clear evidence supporting their claims regarding the trade-off between stability and plasticity at the architectural level. The ablation study also effectively supports their claims. Methods And Evaluation Criteria: The proposed method presents a simple yet effective solution to the stability-plasticity dilemma by combining the strengths of two specialized architectures. The evaluation criteria, including benchmark datasets and metrics, are consistent with established conventions and well-suited for validating the proposed method. Theoretical Claims: This paper is primarily motivated by empirical studies, so there are no formal theoretical proofs to verify. Experimental Designs Or Analyses: I have examined the implementation details and reviewed the provided source code.
The proposed method does achieve a fair improvement compared with the baselines. Supplementary Material: I have reviewed the entire supplementary material, which includes analyses of FLOPs, additional experimental results, and implementation details. Relation To Broader Scientific Literature: While existing research focuses on optimizing weights to address the stability-plasticity trade-off at the parameter level, this study introduces a novel perspective by extending this trade-off to the architectural level. Moreover, by exploring and addressing the inherent conflict between stability and plasticity at the architectural level, the paper distinguishes itself from existing exploration of neural architectures for continual learning. Essential References Not Discussed: The related works section is comprehensive, covering both learning methods and neural architectures for continual learning. To the best of my knowledge, no key related works are missing. Other Strengths And Weaknesses: *Strengths:* 1. The paper provides a novel and insightful perspective on continual learning by highlighting the inherent conflict between stability and plasticity at the architectural level. The empirical finding that depth benefits plasticity while width enhances stability has the potential to inspire future research to enhance continual learning by exploring more balanced architectures. 2. After reading lines 86-107, I found the proposed method simple yet intuitive. Moreover, its effectiveness in combining the strengths of two specialized architectures is well validated through experiments and analysis. 3. The performance results presented in Table 2 are impressive. The method demonstrates significant and consistent improvements across different continual learning methods and benchmarks, while also achieving substantial gains in parameter efficiency. 4. The paper is well-written, clearly structured, and easy to follow. *Weaknesses:* 1.
Recent studies have increasingly focused on replay-free settings. Although the effectiveness of the proposed method has been well demonstrated across various continual learning approaches, further validation in replay-free settings would still be beneficial. 2. The discussion of "Architecture-based methods" in the related work section may be somewhat confusing. I recommend the authors clarify why these methods belong under "*Learning Methods for CL*" rather than "*Neural Architectures for CL*." 3. The rules for computing parameter counts are reasonable but differ from those used in certain existing studies. To avoid confusion, I recommend the authors detail these rules in the main paper. Other Comments Or Suggestions: - In line 113, "exploit it" should be corrected to "exploited." - There is inconsistent formatting with "w/ Ours" and "w/ ours." - The font size in Figure 3 is too small. Questions For Authors: Could the authors provide an analysis of the proposed method's effectiveness when combined with state-of-the-art replay-free continual learning methods? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely thank Reviewer MGyK for the recognition of our insightful research perspective, simple yet effective method, and impressive performance improvements. We are also grateful for the valuable and constructive feedback.

> Validation on replay-free settings.

While our current experiments primarily focus on replay-based continual learning settings, our approach is also compatible with state-of-the-art replay-free methods. To demonstrate this, we evaluate our approach in conjunction with three representative replay-free continual learning methods: LWF [1], ADC [2], and LDC [3]. The results (LA /%) consistently show that our Dual-Arch approach enhances the performance of all baseline methods, underscoring its versatility and effectiveness in replay-free continual learning scenarios.

| Method  | CIFAR100/20 | CIFAR100/10 |
| ------- | ----------- | ----------- |
| LWF     | 16.98       | 26.63       |
| w/ Ours | **20.27**   | **33.81**   |
| ADC     | 30.48       | 40.25       |
| w/ Ours | **31.90**   | **40.69**   |
| LDC     | 31.61       | 41.57       |
| w/ Ours | **34.69**   | **42.91**   |

> Architecture-based methods.

We appreciate the reviewer's valuable comments and will revise Sections 2.1 and 2.2 to clarify the distinction between two key concepts. To clarify, architecture-based methods primarily focus on dynamically expanding or allocating networks for each task based on a given basic architecture. In contrast, "Neural Architecture for CL" refers to studies that explore and optimize the basic architecture itself to better suit continual learning objectives.

> Parameter counts rules.

We thank the reviewer for raising this point. To avoid any confusion, we will update the description to explicitly state that the reported parameter counts include all used networks (not just the main network).

> Typo issues and Figure presentation.

We sincerely appreciate the reviewer's attention to detail.
All mentioned typos have been corrected, and we have increased the font size in Figure 3 for better readability. We will thoroughly proofread the entire paper to ensure clarity and correctness. **References:** [1] Li, Zhizhong, and Derek Hoiem. Learning without forgetting. IEEE TPAMI 2017. [2] Goswami, Dipam, et al. Resurrecting old classes with new data for exemplar-free continual learning. CVPR 2024. [3] Gomez-Villa, Alex, et al. Exemplar-free continual representation learning via learnable drift compensation. ECCV 2024.
Summary: This paper investigates the stability-plasticity trade-off in continual learning (CL) from an architectural perspective. The authors empirically demonstrate that deeper networks favor plasticity, while wider networks enhance stability under fixed parameter constraints. To address this, they propose Dual-Arch, a framework combining two specialized architectures: a wide/shallow "Sta-Net" for stability and a deep/narrow "Pla-Net" for plasticity. Claims And Evidence: **Generalizability of architectural insights:** Experiments are limited to ResNet variants and small datasets. ViT/MLP results (Tables 4–6) show inconsistent trends (e.g., deeper MLPs improve AAN but harm FAF), weakening the claim. Methods And Evaluation Criteria: **Dataset limitations**: Experiments on CIFAR100/ImageNet-100 may not reflect real-world scalability. No validation on larger datasets (e.g., ImageNet-1K). Theoretical Claims: No, because this paper does not include formal theoretical proofs. The claims are primarily empirical. Experimental Designs Or Analyses: **Architectural trade-off as primary performance driver:** Ablation results (Table 3) show that adding any second model (even non-specialized architectures) improves performance, suggesting gains may stem from dual-model integration rather than architectural specialization. For example, using two Sta-Nets (row 4, Table 3) achieves 72.27% AIA vs. 72.92% for Dual-Arch, only a 0.65% gap. If I understand correctly, in your method design, the auxiliary model is re-initialized as new tasks appear. The plasticity of a freshly initialized model is stronger than that of a model that has already been trained on a task [1]. This could also explain the algorithm's better performance, since the plasticity of the entire system is improved, but it still has no direct connection to your core proposition. [1] Dohare S, Hernandez-Garcia J F, Lan Q, et al. Loss of plasticity in deep continual learning[J]. Nature, 2024, 632(8026): 768-774.
Supplementary Material: In the supplementary material, ViT results (Table 4) show modest gains (+2.47% AIA for iCaRL), and MLP results (Table 6) contradict the main claim (deeper MLPs improve AAN but hurt FAF). Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No. Other Strengths And Weaknesses: No Other Comments Or Suggestions: 1. Can you clarify why iCaRL was chosen over simpler baselines for architectural analysis? 2. Can you provide a theoretical explanation (e.g., via NTK or feature learning) for why depth/width affect stability/plasticity? Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer xKgH for the valuable and constructive feedback.

> Generalizability of architectural insight / Concern about supplementary material.

There may be a misunderstanding regarding the Tab. 5 and 6 results:
- **Shallower yet wider** ViT (5×49, Tab. 5): Lower AAN/FAF value → **reduced plasticity, improved stability**.
- **Shallower yet wider** MLP (3×1050, Tab. 6): Lower AAN/FAF value → **reduced plasticity, improved stability**.
- **Deeper yet narrower** MLP (5×680, Tab. 6): Higher AAN/FAF value → **improved plasticity, reduced stability**.

These align with ResNet trends (Table 1), supporting our architectural insights (depth → plasticity, width → stability).

> Performance gains in Tab. 4.

We believe that the performance gains on ViT (minimum +2.47%, maximum +5.88%) are not very modest, especially considering the fact that our method reduces parameter counts by 30%.

> Concern about dataset limitations.

CIFAR100 and ImageNet100 are common benchmarks in continual learning (CL) research [1-3]. To further address the reviewer's concern, we extend our evaluation to large-scale ImageNet-1K/10 tasks. Following [4], we use the 32×32 downsampled ImageNet-1K for efficiency, with a replay buffer size of 20,000 as in [5]. Results show our method remains effective, achieving gains of +3.37% and +5.11%, indicating its strong scalability.

| Methods | AIA /% ↑          |
| ------- | ----------------- |
| iCaRL   | 24.25             |
| w/ Ours | **27.62 (+3.37)** |
| WA      | 22.42             |
| w/ Ours | **27.53 (+5.11)** |

> Concern about primary performance driver.

We clarify two potential misunderstandings regarding the performance drivers of Dual-Arch. a) The auxiliary (plastic) model's weights ($\phi_t$) are **not reinitialized** but are **updated incrementally** from the previous task's weights ($\phi_{t-1}$), as shown in Fig. 2. So the performance gains could **not** stem from the plasticity of a freshly initialized model. We will further clarify this in Sec.
4.4 of this revision. b) Dual-Arch's efficacy stems from two key components as mentioned in Sec. 5.3 (lines 324-325): 1. Dual networks (stable and plastic learner) 2. Dedicated architectures (Sta-Net as stable learner, Pla-Net as plastic learner). While dual Sta-Nets perform strongly (still worse than Dual-Arch), this does not weaken our core proposition. Instead, it highlights the importance of:
- Dual networks (dual Sta-Nets still involve a stable and a plastic learner)
- Dedicated architectures (Sta-Net as the stable learner)

To further validate the importance of dedicated architectures, we compare against dual ResNet-18 networks. Results (AIA %) show that our specialized architectures achieve better performance with fewer parameters.

| Architectures          | Parameters (iCaRL) | Avg on Five methods |
| ---------------------- | ------------------ | ------------------- |
| Dual ResNet-18         | 33.6M              | 71.46               |
| Ours (Sta-Net+Pla-Net) | 15.1M (↓55%)       | 72.94 (+1.48)       |

> Why iCaRL was chosen?

Simpler baselines like Experience Replay (ER) may not fully suit CL evaluation, particularly stability. A key limitation is that without additional CL strategies, performance on old tasks might not primarily rely on the model's knowledge preservation. To investigate this, we introduce an ablation on CIFAR100/10: "w/ buffer only", where models:
- Train **only on the last five tasks**,
- Use an initial replay buffer from the first five tasks,
- Evaluate performance on the first five tasks after learning the final task.

| Method         | Avg on Task1-5 |
| -------------- | -------------- |
| ER             | 38.48          |
| w/ buffer only | 36.38 (-2.10)  |
| iCaRL          | 48.64          |
| w/ buffer only | 41.40 (-7.24)  |

Results show the ablation achieves relatively comparable performance on tasks 1–5 to the original ER, despite no direct learning. In contrast, iCaRL shows a larger gap (7.24% vs. 2.10%), indicating the model preserves knowledge beyond buffer memorization.
This highlights iCaRL’s suitability for evaluating CL architectures beyond ER. > Theoretical explanation. While theoretical analysis (e.g., NTK or feature learning) could offer deeper insights, it is beyond this work’s scope. Existing theories often rely on idealized assumptions (e.g., infinite/very large width or two-layer networks), making extensions to our empirical width/depth trade-offs highly non-trivial. **References**: [1] Rethinking Momentum Knowledge Distillation in Online Continual Learning. ICML 2024. [2] Resurrecting old classes with new data for exemplar-free continual learning. CVPR 2024. [3] Incorporating neuro-inspired adaptability for continual learning in artificial intelligence. Nature Machine Intelligence 2023. [4] Loss of plasticity in deep continual learning. *Nature* 2024. [5] Pycil: a python toolbox for class-incremental learning. 2023.
Fleet of Agents: Coordinated Problem Solving with Large Language Models
Accept (poster)
Summary: The work proposes a novel framework, FOA, that employs LLM agents for dynamic tree search using a genetic-type particle filtering approach. The multiple agents provide dynamic branching and adapt the exploration strategy. A major claim is to improve the cost-efficiency of multi-query methods. The paper compares FOA against several baselines like IO prompting, CoT, AoT, ToT, and GoT, over tasks like "Game of 24", "Mini Crosswords", and "WebShop", with favorable results. Claims And Evidence: The claims are fairly evident as the method is developed through the paper. They are . FOA i Methods And Evaluation Criteria: The benchmark tasks appear appropriate because they require different reasoning, planning, and problem-solving skills. Using success rate (for "Game of 24"), overlap (for "Mini Crosswords"), and average score (for "WebShop") appears to be standard, judging from the benchmark results reported. Theoretical Claims: There aren't any. Experimental Designs Or Analyses: This is a design-oriented work. But it justifies the choices made with benchmarking, ablation studies, and cost-quality trade-off, model size, and quality analyses. Supplementary Material: The source repo was inaccessible or absent at the provided URL. This was my experience, and perhaps there are rights or other problems that don't generalize. Relation To Broader Scientific Literature: Figure 2 IMV is a broad juxtaposition. Unless the LR is insufficient, I see a good amount of positioning around *-of-thought reasoning models. Essential References Not Discussed: I'm not well positioned to enunciate that. Other Strengths And Weaknesses: It is an architectural, engineering work. Unless the culture at ICML is shifting, that is peripheral to the former. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their careful and thoughtful review. We appreciate the positive assessment, recognizing the new state-of-the-art performance of our method and its extensive evaluation. We are grateful for the reviewer’s thoughtful engagement, and hope that our clarifications help address the concerns raised. We will be happy to answer any further questions that the reviewer may have and would appreciate it if the reviewer would consider updating their score in light of this response. **Is this mainly an engineering work, and as such, a good fit for ICML?** We believe that the reviewer’s main concern originates from a misunderstanding about the nature of our work, which we clarify below with a point-by-point response. 1. **FoA is a principled algorithm.** We completely agree with the reviewer that our research is built on a significant engineering effort. However, our work also makes a meaningful contribution beyond architectural or engineering work. To the best of our knowledge, the application of genetic particle filtering to LLM-based reasoning processes is novel. We present a principled formulation of the algorithm and provide a deep empirical analysis of its behavior through extensive ablations. 2. **FoA is a runtime and doesn’t rely on crafted prompts.** We would like to highlight that unlike all existing works, we do not craft custom prompts for FOA, but instead, we reuse the prompts (cf. Appx. C.3 for details) provided by ToT (Yao et al., 2024) for the “Game of 24” and “Mini Crosswords” tasks and LATS (Zhou et al., 2024) for the “WebShop” task. This ensures a direct and fair comparison of the reasoning abilities of the benchmarked frameworks and controls for the impact of prompt quality, which is a confounder. (Lines 240-247, paper). 
Moreover, we would also like to highlight our response to **Reviewer JAyM** under **Comparison with ReST-MCTS\* [NeurIPS’24]**, where FoA was extended to a new benchmark task, SciBench, without any prompt tuning, **corroborating its strength as an algorithm**. 3. **ICML's historical inclination towards Applied ML research.** Finally, we believe our work aligns well with ICML’s long-standing interest in applied machine learning research, particularly work that enables new capabilities, systematically explores trade-offs, and provides reproducible insights grounded in empirical evaluation. A few examples below: 1. Fast support vector machine training and classification on graphics processors (ICML 2008) https://dl.acm.org/doi/10.1145/1390156.1390170 2. Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures (ICML 2013) https://proceedings.mlr.press/v28/bergstra13.html 3. Scaling Laws for Fine-Grained Mixture of Experts (ICML 2024) https://proceedings.mlr.press/v235/ludziejewski24a.html **Access to supplementary material** All artifacts of our research, including code, data, and prompts, are publicly available at: https://anonymous.4open.science/r/FoA-4D83. Our results are fully reproducible. **Note:** Humbly and politely, it is our impression that the reviewer's comment in **Claims and Evidence** is cut off. We welcome any feedback coming our way. If the reviewer would clarify their intention, we would greatly appreciate it and act upon it. --- Rebuttal Comment 1.1: Comment: Agreeing with the argument. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Mfui, Thanks a lot for taking the time to review our response and agreeing with the points outlined therein. Since the author feedback phase ends tomorrow, it would be great if you could kindly let us know if you require anything further from our side.
Alternatively, we would sincerely appreciate it if you would consider updating your score in light of your agreement with our original response. Thanks once again for taking the time and writing a thoughtful review of our work. Appreciate it! Best, Paper 4052 authors
Summary: The paper proposes Fleet of Agents (FOA), a novel multi-agent framework leveraging genetic particle filtering to enhance the problem-solving capabilities of Large Language Models (LLMs). FOA achieves improved reasoning quality and significantly reduces computational costs compared to state-of-the-art methods, demonstrating strong performance across various benchmark tasks and different sizes of language models. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Specifically, the authors present extensive experimental results across three diverse benchmark tasks ("Game of 24," "Mini-Crosswords," and "WebShop") using multiple LLMs (GPT-3.5, GPT-4, LLaMA3.2-11B, and LLaMA3.2-90B). Their primary claims regarding FOA’s effectiveness in improving reasoning quality (~5% increase) and substantially reducing computational costs (~60% decrease) compared to state-of-the-art methods are substantiated by detailed comparisons against multiple baselines (e.g., IO, CoT, CoT-SC, AoT, ToT, GoT, RAFA, LASER, LATS) on clearly defined metrics. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate and make sense for the problem at hand. Theoretical Claims: 1. In line 188, right column, are ‘, cat. resampling distr.’ typos? Could you make this mathematical part clearer? Also, for abbreviations such as ‘iid’, maybe you should define these preliminaries in the article; when using such terminology, writing it out inline would aid understanding. 2.
What do you want to express in line 194, right column? Experimental Designs Or Analyses: Yes. Why don't the RAFA results align with the paper “Reason for Future, Act for Now: A Principled Architecture for Autonomous LLM Agents”, and what is the difference between the original codebase and your experiment? Supplementary Material: Yes, prompts and more experiment results. Relation To Broader Scientific Literature: Related to test-time scaling methods for LLM reasoning such as CoT and ToT. Essential References Not Discussed: I would like to see a comparison and discussion with ReST-MCTS. Other Strengths And Weaknesses: Strengths: 1. **Innovative Method** – The paper introduces a novel approach that advances the current state of the field. 2. **State-of-the-Art Results** – The proposed method achieves competitive or superior performance compared to existing benchmarks. 3. **Cost-Effective Solution** – The approach demonstrates strong performance while maintaining efficiency in terms of computational or resource costs. Weaknesses: **Clarity in Writing** – Some parts of the manuscript, particularly the mathematical formulations, lack clarity and could benefit from improved explanations and notation consistency. Other Comments Or Suggestions: 1. In Table 1, if you could not run RAFA, maybe just don't present it. Questions For Authors: 1. In line 166, how do you define an invalid state, and how is the resampling from it uniform? It's not quite clear. Also, the bullet point here is "sudden death", but you never define what actually counts as sudden death, which is unclear writing. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. We are encouraged by their overall positive assessment, recognizing the novelty, strong performance, and cost-effectiveness of our method, and the extensive nature of our experiments involving diverse benchmark tasks, multiple baselines, and LLMs of different sizes. We hope to have comprehensively clarified all the concerns of the reviewer with our responses below. We will be happy to answer any further questions that the reviewer may have and hope that the reviewer considers adapting the overall assessment of our paper. **Comparison with ReST-MCTS\* [NeurIPS’24]**: We thank the reviewer for this suggestion. So far, we have been able to perform the following new experiments, which have only strengthened the paper further. Based on the reviewer’s feedback, we are willing to extend ReST-MCTS* to the remaining two tasks within the author feedback period. All new implementations are in our GitHub repository linked from the paper. 1. **ReST-MCTS\* on SciBench**: To ensure a correct understanding of their codebase and setup, we ran MCTS* (as shown in the README) on the SciBench benchmark. Not only were we able to reproduce their results (with minor variations due to inherent LLM stochasticity), but we also obtained prompts for SciBench, which allowed us to extend FoA to this new benchmark with minimal added effort. [Table 4](https://pasteboard.co/yZGwwm3MNgY6.png) shows that on average, **FoA outperforms ReST-MCTS\* by obtaining a quality improvement of ~2% while reducing the cost by ~10%**. Once again, we’d like to highlight this point for all reviewers: **we do not craft custom prompts for FoA, but instead, we reuse the prompts provided in the official implementations, corroborating the strength of FoA as an algorithm**. 2. **ReST-MCTS\* on Game of 24**: With our improved understanding of the ReST-MCTS* codebase, we extended it to our Game of 24 benchmark. 1.
**Tuning:** Similar to other benchmarked methods in our paper, we tuned the ReST-MCTS* hyperparameters using GPT-3.5 as the base model. We found that the branch parameter $b=5$ and iteration number $T=50$ led to the best success rate (cf. results [tuning b](https://pasteboard.co/GWbNwKHeq2sr.png) and [tuning T](https://pasteboard.co/GfuTCe4zvJqC.png)). 2. **Main results:** [Table 1](https://pasteboard.co/evCa1SRTigqo.png) presents results with GPT-4 as the base model and shows that FoA substantially outperforms ReST-MCTS* by being **\~40% better in quality while requiring only \~50% of the cost**. Owing to its exorbitant cost (~300 USD), we could not run ReST-MCTS* with $T=50$, and the presented results instead use $T=10$, thereby providing ReST-MCTS* with a similar cost budget as other baselines. **RAFA results**: We thank the reviewer for bringing this point up. We used the official implementation of RAFA provided by the authors. The difference in the results reported in our and the original RAFA paper stems from a **difference in the definition of the success rate evaluation metric**. RAFA used a relaxed version (cf. footer 1, page 50, ICML’24 camera-ready) of the success rate metric. However, for a fair comparison across all the benchmarked methods (ToT, GoT, RAFA, and FoA), we used the original implementation introduced in the ToT paper for all the baselines. As shown in [Table 5](https://pasteboard.co/5LIOBtxZKdmc.png), **using the relaxed success rate metric, our results align well** with those reported in the original RAFA paper. **Clarity in writing**: We thank the reviewer for their suggestions regarding writing clarity. Following is our point-by-point response to answer their concerns. We will utilize the extra page available for the final version to weave these details more cleanly into the main text. 1. 
**Firming up mathematical formulations.** In our theoretical discussion (lines 185-194), we describe a resampling mechanism with replacement, where samples are drawn i.i.d. from a categorical distribution. We completely agree with the reviewer’s suggestion and will fix the equations by avoiding abbreviations in the notation, including terms such as ‘i.i.d.’ in preliminaries, and providing textual explanations of the equations. 2. **Invalid states and Sudden Death.** Depending on the task, some solution attempts may result in invalid states. For example, in a crossword puzzle, an agent might propose a word with the wrong number of characters, leading to an ill-defined state. It is standard practice, across algorithms (e.g. ToT, GoT), to prevent the search from continuing in such cases. For FoA, we use a simple filtering mechanism: when an agent enters an invalid state, it is deleted, and another agent is randomly duplicated to fill the gap. This is what we refer to as sudden death. The replacement agent is chosen uniformly at random, avoiding additional calls to the value function. We will include the aforementioned short description in Sec. 3.2 of the paper in the final version.
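For readers unfamiliar with the resampling step discussed above, i.i.d. sampling with replacement from a categorical distribution (the standard genetic/particle-filter resampling move) can be sketched as follows. This is an illustrative snippet, not the authors' implementation; the agent names and value scores are hypothetical:

```python
import random


def resample(fleet, values, rng=None):
    """Draw len(fleet) i.i.d. samples with replacement from the
    categorical distribution defined by the normalized values, so
    higher-value agents tend to be duplicated in the new fleet."""
    rng = rng if rng is not None else random.Random(0)
    total = sum(values)
    probs = [v / total for v in values]
    return rng.choices(fleet, weights=probs, k=len(fleet))


# Hypothetical fleet; value-function scores act as resampling weights.
fleet = ["agent_a", "agent_b", "agent_c"]
values = [0.7, 0.2, 0.1]
new_fleet = resample(fleet, values)
print(new_fleet)
```

Because sampling is with replacement, an agent can appear multiple times in the new fleet, which is how promising states get duplicated; the "sudden death" filter described above is the degenerate case where an invalid agent's weight drops to zero and its slot is refilled by a uniformly random survivor.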
Summary: The paper introduces Fleet of Agents, a framework that coordinates multiple LLM agents using a genetic-type particle filtering approach to optimize the balance between exploration and exploitation in problem-solving tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Good Essential References Not Discussed: No Other Strengths And Weaknesses: Weaknesses: 1. I believe the three tasks are easy and a little bit too old. 2. The current FOA implementation uses a fixed number of agents per task, while an adaptive mechanism could improve efficiency. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and overall positive assessment. We hope to have comprehensively clarified all the questions and concerns of the reviewer with our responses below. We will be happy to answer any further questions that the reviewer may have and hope that the reviewer considers adapting the overall assessment of our paper. **Are the evaluation tasks adequate?** We thank the reviewer for bringing this point up. Following is our point-by-point response to answer their concern. 1. **Vetted benchmarks:** We believe that we have chosen a **judicious mix** of tasks. It is important that these tasks are well established in the literature, primarily to enable meaningful comparisons with prior work. Moreover, a recent [survey on LLM agents](https://paperswithcode.com/paper/survey-on-evaluation-of-llm-based-agents#datasets) highlights that our tasks are among the most commonly used in the field and remain representative of current evaluation standards, a point also recognized by the other two reviewers. 2. **Tasks appear easy.** While the tasks may appear easy from their description, they continue to pose a significant **challenge**, even to powerful proprietary methods. For example, using a carefully designed **CoT prompt, GPT-4 solves only 6%** of the Game of 24 challenges (our first task). Moreover, as shown by the additional evaluation of open-source models in the appendix, our choice of tasks poses a **significant challenge for these smaller models**. For example, as shown in Table 8 in the appendix, while most methods achieve a high "average score" (>0.5) using GPT-3.5, only FoA portrays strong performance (average score ~0.75) with Llama 3.2-11B, whereas other methods struggle. This result corroborates the difficulty of the tasks and the strength of our FoA framework to work well with small open-source models. 3.
**New task.** Finally, given the suggestion from **Reviewer JAyM**, we were able to successfully add results (cf. our response to **Reviewer JAyM** under **Comparison with ReST-MCTS\* [NeurIPS’24]**) on a new benchmark task, **SciBench**. [Table 4](https://pasteboard.co/yZGwwm3MNgY6.png) shows that on average, FoA outperforms ReST-MCTS* by obtaining a quality improvement of \~2% while reducing the cost by \~10%. **Would it be possible to use a dynamic number of agents in an adaptive fleet-size mechanism?** We completely agree with the reviewer that this is an interesting question and a great idea. In fact, we already highlighted this point under the limitations section (Section 7.3) of our paper. In the same section, we also bring up the concept of an adaptive fleet size as future work. Overall, we hope that our answers have satisfied the reviewer. We are confident in the strength of our contributions and believe that the experimental setup and results amply demonstrate the relevance and rigor of our work. --- Rebuttal Comment 1.1: Comment: No further questions. I will keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer RBUc, Thanks, for your time and consideration! Best, Paper 4052 authors
Self-supervised Adversarial Purification for Graph Neural Networks
Accept (poster)
Summary: The paper proposes a method to defend GNNs that is based on a separate GNN classifier and a purifier. The main idea is to decouple the classifier and purifier and learn a multi-step purifier using generalized pagerank. Extensive experiments are provided showing that this approach outperforms state-of-the-art defenses or purification methods. Claims And Evidence: Most of the major claims are well supported. However, I feel some claims are rather vaguely supported / evidenced. **(I)** They claim to contribute a decoupling into two components: a classifier for accuracy and a purifier to restore graph structure. But aren't the static approaches such as Jaccard-GCN or SVD-GCN also already "decoupled" in the sense of first preprocessing the graph and then applying a GNN on top? **(II)** In the abstract, the method is framed as a solution to overcome the robustness-accuracy tradeoff. While empirically the trade-off seems slightly better than in the compared methods, claiming a perfect balance of both would require a more extensive, also theoretical, study. Methods And Evaluation Criteria: Yes. Theoretical Claims: I checked the correctness of all theoretical claims and went through the proofs in the Appendix. **Issues** **(I)** In Sec 2.2, one could be explicit about what the expectation is taken over. E.g., the general objective is to minimize the expected loss over the distribution of graphs that could be sampled conditioned on the already given graph. However, for Proposition 3.1, e.g., the empirical risk is used, as this is the one that can actually be used for training a model on a concrete dataset. This is no criticism of using the empirical risk, just that one could be explicit in the main draft. **(II)** Given a fixed perturbation, Proposition 3.1 holds true. However, during the attack generation for adversarial training, each iteration will have different $\mathcal{V}_\text{unaffected}$ and $\mathcal{V}_\text{affected}$.
Thus, while I do understand the motivation for the approach, I'm not 100% convinced of the argumentation. But as it is "just" a motivation, this may be enough. **(III)** Could you clarify the Lipschitzness of your proposed updating scheme? Crucially, Thm. 3.2 requires $L < 1$. Experimental Designs Or Analyses: Yes, all experiments/analyses in the main draft.

## Strengths
* Follows state-of-the-art evaluation protocols for evasion attacks (e.g., inductive setting)
* I really like Table 1. The number of experiments and comparisons per dataset is impressive. They clearly show the effectiveness of GPR-GAE compared to adversarial training and other robust GNNs.
* Ablation study on the many design choices provided.
* Code provided.

## Weaknesses
**(I)** As all homophilic datasets are citation datasets, I would be curious about the performance on homophilic non-citation datasets, e.g., WikiCS. **(II)** Given Cora is the main dataset for a detailed comparison between methods in the main draft, I would have expected Table 2 to include the Cora dataset (or that other purification methods had also been employed on Cora). **(III)** All results are presented with standard deviations, but I can't find a description of how many random seeds were used. Supplementary Material: I reviewed the appendix, but not the code associated with the submission. Relation To Broader Scientific Literature: The work contributes a superior purification method compared to previous static approaches. Furthermore, they show that their approach outperforms state-of-the-art defenses. Essential References Not Discussed: I do think that all essential references are discussed. Other Strengths And Weaknesses:

## Weaknesses
* The method seems to have many hyperparameters, e.g. $\tau, p, q, \eta, \delta$, #purification steps, and other design decisions. However, the authors provide ablations.

Other Comments Or Suggestions: See questions.
Questions For Authors:
* Is the edge injection step a uniform sampling from the complementary edge set? Furthermore, please make explicit how you sample from $\mathcal{E}^C$.
* Make explicit whether the $L_\text{restore}$ loss is maximized or minimized. Furthermore, as far as I understand how $L_\text{restore}$ is currently defined in the text, it would have to be maximized to make sense (minimizing it would be trivial by outputting an $A_{ij}$-prediction around $0$ for every $\{i,j\} \in \mathcal{E}$, and that is exactly not what we want). However, $L_\text{sym}$ should be minimized. Thus, $L_\text{restore}+\delta L_\text{sym}$ doesn't make sense with the current definitions. Is there a sign error in the current presentation?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments. We would like to address your concerns (C) with the following responses:

---

> **C1.** Aren't the static approaches such as Jaccard-GCN or SVD-GCN also already "decoupled"?

Unlike Jaccard-GCN and SVD-GCN, which apply fixed heuristics for preprocessing, our contribution explicitly motivates **decoupled learning**: the purifier (GPR-GAE) independently learns graph reconstruction via a self-supervised objective, while the classifier focuses solely on accuracy. Thus, we advocate decoupling not merely in preprocessing steps, but crucially within the learning process itself.

---

> **C2.** Claiming a perfect balance of both trade-off terms would require a more extensive, and also theoretical, study.

We thank the reviewer for highlighting this point. To clarify, when we say "overcome" in the abstract, we refer to the constraints of traditional methods intertwining accuracy and robustness within one classifier. Proposition 3.1 theoretically decomposes these conflicting objectives, motivating our design of specialized modules that independently handle each—especially robustness. While we fully agree that claiming a perfect balance requires further theoretical study, our empirical results show a substantial improvement in **mitigating** the trade-off. We will ensure this nuance is clearly articulated in future revisions.

---

> **C3.** In Sec 2.2 one could be explicit about what the expectation is taken over.

We thank the reviewer for the advice and will explicitly state that the expectation in Sec 2.2 refers to the empirical risk over nodes in the given graph.

---

> **C4.** Given a fixed perturbation, Proposition 3.1 holds true. However, during attack generation for adversarial training, each iteration will have different $V_{\text{unaffected}}$ and $V_{\text{affected}}$.

We thank the reviewer for this insightful remark.
To clarify the notation, we will explicitly denote the node sets as $V_{\text{unaffected}}^{G'}$ and $V_{\text{affected}}^{G'}$ to reflect their dependence on each perturbation $G'$ generated during adversarial training. Importantly, this notation refinement does not affect our core claim: each training iteration still highlights the inherent tension between accuracy and robustness, reinforcing our rationale for decoupling these conflicting objectives into separate specialized modules.

---

> **C5.** Could you clarify the Lipschitzness of your proposed updating scheme? Crucially, Thm. 3.2 requires $L < 1$.

This is a valuable question that we are happy to clarify. The Lipschitz condition $L < 1$, required by Theorem 3.2 for convergence of the purification update scheme, is encouraged by our training objective: minimizing $\mathcal{L}(\theta) = \ell(f_{\theta}(G'), G)$, where $G'$ is a perturbed version of the clean graph $G$. Since $G$ is fixed during training and the model learns to satisfy $f_{\theta}(G) \approx G$, the loss encourages the condition $\|f_{\theta}(G') - f_{\theta}(G)\| < \|G' - G\|$, i.e., $f_{\theta}$ behaves as a contraction mapping around $G$, approximating a Lipschitz constant $L < 1$ under a sufficient perturbation budget. This ensures stable and convergent updates in the multi-step purification process, as required by Theorem 3.2.

---

> **C6.** Additional experiment on WikiCS and Cora (Table 2)

| Model | WikiCS (clean/0.25/0.5) | Cora (clean/0.25/0.5) |
|--|-|-|
| GCN (Vanilla) |75.0 / 29.0 / 24.5| 79.4 / 46.9 / 29.3|
| EvenNet | **76.7** / 35.8 / 31.9| **81.4** / 65.9 / 56.4 |
| SoftMedianGDC| 73.5 / 33.5 / 30.7 | 77.4 / 62.3 / 53.6|
| Jaccard-GCN | 73.4 / 58.8 / 52.5 | 78.5 / 64.1 / 50.9 |
| SVD-GCN | 73.0 / 42.3 / 36.9 | 77.2 / 64.8 / 50.2 |
| GOOD-AT | 74.1 / 58.7 / 51.2 | 77.7 / 65.4 / 61.0 |
| **$GPR\text{-}GAE_{GCN\text{-}Vanilla}$** |75.1 / **66.8 / 63.0** | 79.4 / **75.1 / 72.6** |

---

> **C7.** How many random seeds?
We used 5 random seeds across all experiments. --- > **C8.** Seems to have many hyperparameters. Although our method involves several hyperparameters, we deliberately keep most of them fixed across all datasets, demonstrating that our approach generalizes well without extensive tuning. --- > **C9.** Is the edge injection step a uniform sampling from the complementary edge set? Yes, the edge injection step involves uniform sampling from the set of non-existent edges $\mathcal{E}^C$. Specifically, we first calculate the number of edges to inject as $|\mathcal{E}_{\text{inject}}| = p \cdot |\mathcal{E}|$, and then uniformly sample this number of edges from $\mathcal{E}^C$ for injection. --- > **C10.** Make explicit if the $L_{restore}$ is maximized or minimized. Thank you for the correction — you are absolutely right. The current loss formulation is misleading as it implies maximization, which contradicts our intended objective of minimizing the restoration loss. We appreciate your correction, and we will adjust the sign and clarify the explanation in the revision. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. It addressed most of my concerns and thus, I'm increasing my score. I think it would be particularly valuable to include a discussion on why the assumptions on Thm 3.2 should be satisfied by the method into the camera-ready version (next to the other points in the rebuttal). --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their thoughtful reconsideration and constructive feedback. We appreciate the suggestion regarding Theorem 3.2 and agree that a discussion of its assumptions would help improve the clarity of the work. We will make sure to incorporate this point alongside the other clarifications raised during the review process. If there’s anything else we can clarify or expand on, we’d be more than happy to help.
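As a toy numerical illustration of the contraction condition $L < 1$ discussed in C5 (and of why Theorem 3.2's assumption yields convergence): the stand-ins below are purely illustrative and are not the paper's purifier; `f` is a hand-built contraction toward a fixed "clean" target.

```python
# Toy illustration (NOT the paper's model) of Thm. 3.2's contraction condition:
# if a map f satisfies ||f(x) - f(y)|| <= L ||x - y|| with L < 1, then
# iterating f converges to its fixed point -- here a stand-in "clean" target.
import math

clean = [1.0, 2.0, 3.0]                    # stand-in for the clean graph G

def f(x):                                  # contraction toward `clean`, L = 0.5
    return [c + 0.5 * (xi - c) for c, xi in zip(clean, x)]

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

x = [c + d for c, d in zip(clean, [4.0, -4.0, 4.0])]  # perturbed stand-in "G'"
dists = []
for _ in range(10):                        # multi-step "purification"
    x = f(x)
    dists.append(dist(x, clean))

# The distance to the clean target shrinks by the factor L at every step.
assert all(later < 0.51 * earlier for earlier, later in zip(dists, dists[1:]))
print(f"final distance to clean target: {dists[-1]:.4f}")
```

The same monotone shrinkage is what the multi-step purification relies on; with $L \geq 1$ the iteration would have no such guarantee.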
Summary: This paper studies the robustness of GNNs against adversarial attacks from the perspective of adversarial purification. The authors introduce a self-supervised adversarial purification framework that preprocesses input data to remove adversarial perturbations before classification. Experimental results on a wide range of graph datasets demonstrate that the proposed method achieves state-of-the-art robustness while maintaining high accuracy. Overall, the proposed approach offers a promising direction for improving adversarial robustness in GNNs without compromising classification accuracy.

Claims And Evidence: The claims made in the submission are generally well-supported by convincing evidence.

Methods And Evaluation Criteria: Comparisons with adversarial-training-based graph defense methods are missing.

Theoretical Claims: The claim in Proposition 3.1 is clear. However, according to Proposition 3.1, with a deep GNN (e.g., when the number of layers is ≥ 3), most nodes in the graph will be affected. My question is: in such a case, does the robustness term become the dominant term in the loss function? If so, will this affect the model's accuracy?

Experimental Designs Or Analyses: See Weaknesses and Questions below.

Supplementary Material: There is no supplementary material provided.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Pros:
- The motivation of defending GNNs against adversarial attacks using a purification module is reasonable.
- The idea of decoupling the learning objective of adversarial training into accuracy and robustness terms is interesting and technically sound.
- Experimental results demonstrate the effectiveness of the proposed method compared to baselines.

Cons:
- Many adversarial training methods are cited but not compared in the experiments.
- Larger datasets, such as OGBN-MAG or OGBN-Products, are needed to demonstrate the scalability of the proposed defense.
- Although the proposed method can serve as a plug-and-play purification module for various GNN architectures, it is expected to introduce additional computational overhead. The authors should discuss the complexity of the proposed method to evaluate whether it is justified as a purification module.

Other Comments Or Suggestions: N/A.

Questions For Authors:
- The proposed method is effective against evasion attacks, but how does it perform against poisoning attacks?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments. We would like to address your concerns (C) with the following responses:

---

> **C1.** The claim in Proposition 3.1 is clear. However, according to Proposition 3.1, with a deep GNN (e.g., when the number of layers ≥ 3), most nodes in the graph will be affected. My question is, in such a case, does the robustness term become the dominant term in the loss function? If so, will this affect the model’s accuracy?

As you noted, with increased depth, more nodes are affected by perturbations, which could overemphasize the robustness term in conventional adversarial training. This overemphasis can indeed lead to a drop in accuracy due to the model learning mostly from fundamentally corrupted data. This consideration is precisely why we advocate for decoupling accuracy and robustness in our self-supervised adversarial purification framework. The decoupling ensures neither objective overshadows the other, preventing performance degradation. We appreciate you highlighting this important architectural consideration!

---

> **C2.** Many adversarial training methods are cited but not compared in the experiments.

To the best of our knowledge, [1] is the most recent published work on adversarial training in Graph Neural Networks (GNNs). Notably, it is the first to explicitly address the validity-related limitations of previous adversarial training in the transductive setting, as mentioned in Section 2.1. We therefore focus our comparison on this state-of-the-art approach.

| Model | Cora | Citeseer |
|---|-|---|
| $S^2GC$ [2] |52.3±3.4|50.2±3.6|
| $GPRGNN$ [1] |71.4±3.0|65.7±4.4|
| $GPR\text{-}GAE_{GCN\text{-}Vanilla}$ | 75.1±2.7|68.7±3.1|

The table further supports our choice of baseline by reporting the performance of $GPR\text{-}GAE_{GCN\text{-}Vanilla}$ and [1] with GPRGNN as backbone under a non-adaptive PRBCD attack with $\epsilon = 0.25$ in an inductive setting.
We also include a previous adversarial training method [2], evaluated with its best-performing backbone, $S^2GC$. Although [2] demonstrates strong results against other adversarial training methods under the transductive setting in its original paper, it significantly underperforms the adversarial training baseline we use [1] under our inductive setup.

---

> **C3.** Larger datasets, such as OGBN-MAG or OGBN-Products, are needed to demonstrate the scalability of the proposed defense.

We follow prior works on adversarial robustness, which commonly use OGBN-Arxiv as the largest-scale benchmark. For further demonstration, we evaluate the scalability of GPR-GAE on OGBN-Products, which has over 10× more nodes and 50× more edges than OGBN-Arxiv. Since the PRBCD and LRBCD attacks run out of memory on OGBN-Products, we use DICE [3], a non-gradient-based attack, to demonstrate the scalable robustness of GPR-GAE in the following table.

|Model|Clean| $\epsilon = 0.1$ | $\epsilon = 0.25$ | $\epsilon = 0.5$ |
|-|-|-|-|-|
| $GCN_{Vanilla}$ |73.7±1.8|68.3±2.4|62.7±2.0|55.9±2.6|
| $GPR\text{-}GAE_{GCN\text{-}Vanilla}$ |73.4±2.3|71.5±3.1|69.7±2.5|68.4±2.4|

The results indicate that GPR-GAE not only scales efficiently to significantly larger datasets like OGBN-Products but also maintains robust performance under adversarial conditions.

---

> **C4.** Although the proposed method can serve as a plug-and-play purification module for various GNN architectures, it is expected to introduce additional computational overhead. The authors should discuss the complexity of the proposed method to evaluate whether it is justified as a purification module.

Our multi-step purification process introduces at most a linear overhead, regardless of the GNN classifier architecture used.
As detailed in Section 5.4 and Section B, while a K-layer GCN has a node encoding complexity of $O(K \cdot |E| \cdot Z)$, our purification module incurs an extra $O(K \cdot |E| \cdot Z^2)$ cost over a small fixed number of purification steps. This additional cost scales linearly with the number of edges and, in our view, is justified by the substantial improvements in adversarial robustness achieved by our method across diverse classifiers. For empirical evidence, please refer to our response to Reviewer "kPxh" (C4).

---

> **C5.** The proposed method is effective against evasion attacks, but how does it perform against poisoning attacks?

Thank you for raising this important point. Our methodology focuses on evasion attacks, as our aim is to address theoretical limitations of adversarial training, which is primarily explored in the evasion setting. While GPR-GAE is not explicitly designed for poisoning, its self-supervised, model-agnostic framework offers potential for extension—an exciting direction for future work.

---

[1] Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions
[2] Spectral Adversarial Training for Robust Graph Neural Network
[3] Adversarial Attacks on Graph Neural Networks via Meta Learning
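As a minimal sketch of the plug-and-play, purify-then-infer decoupling discussed in C4: all names below (`purifier`, `classifier`, the weighted-edge dict) are illustrative stand-ins invented for this sketch, not the paper's implementation. The point is only structural: the purifier is a separate function applied before any classifier.

```python
# Minimal sketch of a decoupled purify-then-infer pipeline. Stand-ins only:
# the real purifier is a learned GAE and the real classifier a GNN; here both
# are toy functions over a weighted-edge dict {(u, v): weight}.

def purifier(adj):
    """Stand-in purifier: drop weakly weighted (suspect) edges."""
    return {e: w for e, w in adj.items() if w >= 0.5}

def classifier(adj):
    """Stand-in classifier: a degree-threshold 'prediction' per node."""
    deg = {}
    for (u, v), w in adj.items():
        deg[u] = deg.get(u, 0.0) + w
        deg[v] = deg.get(v, 0.0) + w
    return {n: int(d >= 1.0) for n, d in deg.items()}

# Perturbed input: the low-weight edge (0, 3) plays the adversarial edge.
perturbed = {(0, 1): 1.0, (1, 2): 1.0, (0, 3): 0.2}
preds = classifier(purifier(perturbed))  # purify first, then infer
print(preds)  # node 3 disappears along with the purged low-weight edge
```

Because the two stages share no parameters, swapping in a different `classifier` requires no retraining of `purifier`, which is the plug-and-play property being claimed.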
Summary: Traditional defense strategies for Graph Neural Networks (GNNs), such as adversarial training, often struggle to balance accuracy and robustness, as they entangle these competing objectives within a single classifier. This paper challenges that approach and introduces a novel self-supervised adversarial purification framework designed to decouple robustness from classification. By incorporating a dedicated purifier that preprocesses input data before classification, the proposed method ensures enhanced resilience against adversarial attacks. Experimental results across multiple datasets and attack scenarios highlight its state-of-the-art robustness, positioning it as an adaptable, plug-and-play solution for fortifying GNN classifiers.

## Update after rebuttal
The primary concern with this paper is its novelty. Prior to the rebuttal, my stance leaned toward a Weak Reject. During the rebuttal phase, the authors provided some reasonable evidence to support their contributions. However, I still have remaining concerns regarding the novelty of the work. Taking these points into account, I would like to maintain a **neutral position**. I do **not lean strongly toward either acceptance or rejection during the AC and reviewer discussion phase.**

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: No discussion.

Experimental Designs Or Analyses: See Other Strengths And Weaknesses.

Supplementary Material: None.

Relation To Broader Scientific Literature: Contributes to various scientific GNN applications.

Essential References Not Discussed: See Other Strengths And Weaknesses.

Other Strengths And Weaknesses:
### Pros
- The paper addresses an important problem with a clear motivation and concrete analysis.
- The writing is well-structured and clear.
- Extensive experiments support the effectiveness of the proposed approach.
---

### Cons
- Concerns regarding novelty
  - The core idea of separating the graph purifier and classifier under an evasion setting closely resembles test-time graph transformation [1].
  - The proposed GPR-GAE lacks technical novelty. Using a GAE as a noisy-edge predictor is a well-established approach [2,3], and simply replacing the GCN layer with GPR-GNN does not introduce significant innovation.
  - I strongly encourage the authors to discuss these concerns in detail.
- Lack of baseline comparisons
  - The paper should include comparisons with [1,2,3]. Additionally, several adversarial-training-based robust GNN methods, such as [4], should be considered as baselines.
- Lack of qualitative analysis
  - Providing qualitative results demonstrating how the proposed method enhances performance would be beneficial.

---

### References
[1] Empowering Graph Representation Learning with Test-Time Graph Transformation
[2] How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision
[3] Self-Guided Robust Graph Structure Refinement
[4] Adversarial Graph Contrastive Learning with Information Regularization

Other Comments Or Suggestions: See Other Strengths And Weaknesses.

Questions For Authors: See Other Strengths And Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments. We address your concerns (C) as follows:

---

> **C1.** The core idea closely resembles test-time graph transformation [1].

We understand the concerns regarding the resemblance to TTGT [1] and wish to highlight four key differences:

1. Dedicated Parameter Training
TTGT leverages embeddings from a pretrained classifier and **avoids additional training**. It performs test-time transformation by optimizing a surrogate loss tied to the classifier via Projected Gradient Descent, an approach that closely resembles PGD-style attacks. In contrast, **our method introduces a dedicated purification module trained on the training graph**, which **performs purification via learned inference, independently of the classifier**.

2. Purification Steps
Our framework performs **multi-step, continuous purification**, enabling gradual refinement and better resilience to severe perturbations. TTGT, by comparison, uses a **single-step discrete projection**. Our iterative purification process differs not only from TTGT but also from other purification approaches.

3. Theoretical Focus
TTGT minimizes a surrogate loss approximating the class-conditional entropy $H(Z|Y)$. Its guarantee (Theorem 2 in [1]) assumes that the class-conditional means ($c_k$) approximate the true distribution—**an assumption that breaks under adversarial attacks**, which distort feature distributions. Hence, **TTGT is more suitable for mild domain shifts, as reflected in the nature of the follow-up works that cite it**.

4. Purification Budget
**TTGT uses a fixed edge-flip budget** like attacks do, limiting its adaptability. In contrast, **our method learns adaptive, continuous multi-step updates without a predefined budget**, improving robustness across varying attack strengths.

---

> **C2.** GPR‑GAE lacks technical novelty; using a GAE as a noisy-edge predictor is well‑established [2,3].
GPR-GAE introduces both **architectural and conceptual novelties** over GPRGNN. While GPRGNN uses a single set of learned propagation coefficients for supervised classification, GPR-GAE employs multiple distinct GPR filters, each with its own propagation weights, concatenated to capture **diverse multi-scale neighborhood patterns**. We also exclude self-loops to better highlight meaningful structural signals. These design choices explicitly aim to enhance the model's structural capability to distinguish clean from adversarial structures, as empirically validated in Figure G.4.

Unlike prior GAE-based defenses such as [3, 5], which rely on additional heuristic pre-processing (e.g., feature similarity) before applying GAE-based denoising like [2], GPR-GAE is **fully data-driven**, using its enhanced structural capability to learn adaptive purification without relying on such assumptions.

Moreover, while most existing self-supervised defense methods like [3, 5] focus on the poisoning setting, our work addresses the underexplored area of self-supervised defense in the evasion setting, with its different challenges—applying a targeted self-supervised framework to this context and introducing an additional point of novelty.

Finally, although GPR-GAE utilizes a loss similar to that of conventional GAEs, its core focus lies in **learning purification directions in a continuous space** via edge re-weighting, effectively leveraging the training graph to enable more precise and generalizable purification against test-stage attacks.

---

> **C3.** Comparison with [1,2,3] and other adversarial training.

We acknowledge the relevance of [1,2,3] and include them in our comparison below. However, some of their core assumptions differ from ours, which limits direct comparability. [3] builds on SuperGAT [2], with the rest of the methodology tailored to the poisoning problem setting. Since our work focuses on the evasion setting, we represent both [2,3] as $SuperGAT_{MX}$ [2] for comparison.
Also, we evaluate $TTGT_{GCN}$ [1] using a 25% purification budget, matching the perturbation level of the non-adaptive PRBCD attack used in the following table, to reflect its best-case performance in our setup.

|Model|Cora|Citeseer|
|-|-|-|
|$GCN$ (for comparison)|46.9±1.5|40.0±2.3|
|$TTGT_{GCN}$ [1]|49.6±2.0|44.2±2.1|
|$SuperGAT_{MX}$ [2,3]|54.6±4.6|46.4±1.7|
|$GPR\text{-}GAE_{GCN}$|75.1±2.7|68.7±3.1|

While [1,2,3] offer valuable contributions, their focus on different problem settings limits their robustness under our evaluation scenario. Please see our response to Reviewer "ShLW" (C2) regarding concerns about the adversarial training baselines.

---

> **C4.** Qualitative results would be beneficial.

Figure D.2 provides qualitative insights by visualizing the learned GPR coefficients, which exhibit diverse, dataset-specific propagation patterns. This highlights GPR-GAE's capability for adaptive purification, leading to improved performance across various datasets.

---

[5] Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' efforts in addressing the concerns. While Comments C3 and C4 have been resolved, I believe the overall novelty of this work remains limited, even after considering the authors' rebuttal. The core idea of the purify-then-train approach appears to stem from the concept of test-time transformation. Although the authors emphasize certain differences between their method and test-time transformation, these distinctions seem incremental, primarily involving the addition of technical details to an existing idea. Similarly, the novelty of GPR-GAE also appears limited. While the authors note some distinctions from prior work, these seem to amount to minor modifications built upon established approaches. Given these concerns, I will maintain my initial score.
---

Reply to Comment 1.1.1: Comment: We appreciate your thoughtful response and would like to further clarify your remaining concerns regarding the novelty of our work.

> The core idea of the purify-then-train approach appears to stem from the concept of test-time transformation.

We believe you may have meant **"purify-then-infer"** rather than **"purify-then-train"**, as the latter typically refers to purification defenses targeting poisoning attacks, which is not the focus of our work.

We would also like to stress that our approach **does not stem from the concept of test-time transformation**. In fact, the notion of **"purify-then-infer"** defines the adversarial purification setting in the context of evasion attacks—that is, purifying a perturbed graph before applying a trained classifier at inference time. All adversarial purification baselines we compare against follow this same setting. As such, our work is grounded directly in this established purification paradigm, as reflected in our title: *"Self-supervised Adversarial Purification for Graph Neural Networks."*

What distinguishes our approach is not the use of a purification step, but the **self-supervised learning framework** we introduce within that paradigm. Specifically, our method is designed to **decouple the learning of conflicting objectives**—classification and purification—by separating the purification model entirely from the classifier, both in terms of training and operation. This design allows the purifier to learn meaningful purification directions from the training graph itself, without relying on classifier feedback or shared representations.

In contrast, test-time transformation:
1. Does **not** include a separate purification module,
2. Does **not** learn from or leverage the training graph, and
3. Directly **uses classifier embeddings** to guide transformation.
Given these core differences, we respectfully but firmly disagree that our distinctions are incremental—either in comparison to test-time transformation or to existing adversarial purification methods. Our goal and core idea have been to move toward a more principled and independent framework for learning robust purification strategies, not the general "purify-then-infer" concept. We hope this clarification helps communicate that more clearly.

---

> While the authors note some distinctions from prior work, these seem to amount to minor modifications built upon established approaches.

We appreciate the perspective and understand that our contributions may appear as minor modifications built upon established approaches. However, we would like to clarify that while some components may seem incremental at a glance, they are in fact carefully motivated and targeted design choices aimed at addressing key limitations in existing methods. Specifically:

- **The concatenated GPR filter design in GPR-GAE** enables the model to capture diverse multi-scale structural patterns, going beyond traditional autoencoders that rely on a single neighborhood aggregation, for better differentiation between clean and adversarial regions.
- **The multi-step purification process in continuous space** avoids abrupt, one-shot changes and allows for adaptive, iterative refinement of the graph structure. This leads to significantly improved recovery with more precise purification, especially under severe perturbations.
- **Our perturbed graph sampling strategy** introduces a more realistic and continuous training environment by combining edge injection, masking, and reweighting. This helps the purifier learn robust purification directions applicable across a broader spectrum of adversarial patterns.
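As a concrete illustration of the sampling strategy in the last point, here is a minimal sketch (all names are hypothetical and invented for this sketch, not our released code) that combines uniform edge injection from $\mathcal{E}^C$, edge masking, and edge reweighting on a toy graph:

```python
# Illustrative perturbed-graph sampler (hypothetical names, NOT the paper's
# code): uniform edge injection from the complement set E^C, edge masking,
# and continuous edge reweighting, as described in the bullet above.
import random

def sample_perturbed_graph(num_nodes, edges, p_inject, p_mask, seed=0):
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    # Complement E^C of the undirected, self-loop-free edge set
    # (materialized here for clarity; large graphs would sample lazily).
    complement = [(i, j) for i in range(num_nodes)
                  for j in range(i + 1, num_nodes) if (i, j) not in edge_set]
    injected = rng.sample(complement, int(p_inject * len(edge_set)))
    masked = set(rng.sample(sorted(edge_set), int(p_mask * len(edge_set))))
    weights = {}
    for e in edge_set - masked:
        weights[e] = rng.uniform(0.5, 1.0)   # reweight surviving clean edges
    for e in injected:
        weights[e] = rng.uniform(0.0, 0.5)   # injected edges start low-weight
    return weights

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
g = sample_perturbed_graph(4, edges, p_inject=0.5, p_mask=0.25)
print(len(g))  # 4 edges - 1 masked + 2 injected = 5 weighted edges
```

The continuous weights are the point: the purifier sees soft perturbations rather than only binary edge flips, which is what we mean by a "more realistic and continuous training environment."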
While not all components might seem individually novel, we believe the overall framework provides a meaningful step toward bridging the gap between traditional graph autoencoding approaches and the specific challenges of adversarial robustness. Our design choices are purposefully aligned to make purification more effective, adaptable, and broadly applicable in adversarial settings. --- We hope this clarification provides a clearer understanding of our motivations and contributions, and would be grateful if you could reconsider the evaluation of our work in light of these points. Thank you once again for your thoughtful and constructive feedback.
Summary: This study introduces a self-supervised adversarial purification framework to enhance the robustness of GNNs against attacks. Unlike traditional methods that merge accuracy and robustness in a single classifier, their approach (GPR-GAE) employs a dedicated purifier to cleanse input data prior to classification. Experimental results show that GPR-GAE achieves promising results.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: Yes.

Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes; no obvious errors were found in the theoretical part.

Relation To Broader Scientific Literature: (1) graph neural networks, (2) robustness, (3) generalization.

Essential References Not Discussed:
[1] How does heterophily impact the robustness of graph neural networks? Theoretical connections and practical implications
[2] Graph neural networks with diverse spectral filtering
[3] Node-oriented spectral filtering for graph neural networks
[4] Node-wise filtering in graph neural networks: A mixture of experts approach
[5] Node-wise localization of graph neural networks

Other Strengths And Weaknesses:
**Strengths:**
1. The research questions addressed in the paper and the authors' designs are both well-motivated.
2. The experimental analysis, enhanced by visualization, is thorough and comprehensive.
3. The paper is well-written and easy to follow.

**Weaknesses:**
1. The authors should further elaborate on the connections to existing work on GNNs with spectral filtering (a.k.a. spectral GNNs).
2. While the authors employ a node-wise filtering approach in their method, they fail to discuss similar works extensively. It is recommended that they consider and engage with relevant literature, such as [2, 3, 4, 5], and select appropriate methods to strengthen their baseline comparisons in the experimental section.
3. This work primarily focuses on the robustness of GNNs, employing spectral filtering as a solution.
However, the use of the GPR filter (an adaptive filter) is closely tied to the issue of graph heterophily. The authors are encouraged to review the work cited as [1] and provide a more in-depth discussion within this context to enhance the impact of their work in the field.
4. While a complexity analysis is included, it is suggested that the authors also provide empirical results.

Other Comments Or Suggestions: See the weakness part in "Other Strengths And Weaknesses".

Questions For Authors: See the weakness part in "Other Strengths And Weaknesses".

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive comments regarding our work. We would like to address your concerns (C) with the following responses:

---

> **C1.** The authors should further elaborate on the connections to existing work on GNNs with spectral filtering (a.k.a. spectral GNNs).

We thank the reviewer for the suggestion. Among spectral GNNs with learnable coefficients—such as ChebNet [6], BernNet [7], and GPRGNN—we build on GPRGNN due to its expressiveness and flexibility. Unlike ChebNet and BernNet, which use fixed polynomial bases, GPRGNN leverages learnable coefficients over a monomial basis, enabling unconstrained adaptation to diverse spectral responses. We extend this by employing multiple GPR filters in parallel, each with independently learned coefficients, and concatenate their outputs. This design captures a wider range of spectral behaviors, enhancing generalization under structural perturbations.

Additionally, we omit self-loops by using the normalized adjacency $\tilde{A}_{ns} = D^{-1/2} A D^{-1/2}$, which preserves spectral properties. Since self-loops suppress high-frequency components, excluding them helps retain fine-grained variations crucial for tasks sensitive to local structure. As shown in Figure F.3, this improves purification under adversarial conditions.

|Filters|Cora|Citeseer|
|-|-|-|
|ChebNet [6]|68.3±2.8| 63.8±2.9 |
|BernNet [7]|67.6±3.1| 62.9±3.0 |
|GPR (Ours)|69.7±2.9|64.1±2.4|

Under an adaptive PRBCD attack ($\epsilon = 0.25$), our GPR-based filters show higher robustness than ChebNet and BernNet, confirming the benefit of their flexible, unconstrained design.

---

> **C2.** The paper adopts a node-wise filtering strategy but lacks sufficient discussion of related node-wise spectral filtering methods (e.g., [2–5]).

We thank the reviewer for this observation.
While our method shares some perspective with node-wise filtering—particularly in leveraging multiple filters to capture diverse structural signals—we emphasize that GPR-GAE takes a fundamentally different approach by using shared global filters instead of assigning unique filters per node. Recent works [2–5] propose node-wise filtering, adapting filter parameters locally based on each node’s structural context. While expressive, this approach is highly vulnerable in adversarial settings: localized attacks that perturb neighborhoods can distort the context for node-specific filter selection, leading to unreliable or misleading outcomes. In contrast, GPR-GAE employs multiple global GPR filters applied uniformly across the graph. These filters are trained to generalize over a wide range of structural patterns, offering inherent robustness to localized noise. Concatenating these globally learned filters yields a rich spectrum of responses without requiring per-node adaptation, avoiding amplification of adversarial perturbations.

---

> **C3.** Discussion related to [1].

We appreciate the reviewer’s insightful suggestion. The theoretical insights in [1] align well with our training strategy. First, [1] observes that structural attacks shift homophilic graphs toward heterophily, emphasizing the value of heterophily-aware robustness. Our self-supervised training strategy injects a large ratio of negative edges—unlike prior GAEs—exposing the model to heterophilous patterns even on homophilic graphs, as further discussed in Figure D.2. Second, [1] advocates separating ego- and neighbor-embeddings. As discussed in Section 4.1, our encoding concatenates representations $H_{\theta_k}$ from distinct GPR filters, with $H_{\theta_0}$ explicitly set to $H^{(0)}$—the initial node embedding—preserving ego-information throughout purification. This design enhances robustness by preserving self-information from potentially corrupted neighborhoods.
Under the adaptive PRBCD attack ($\epsilon = 0.25$), removing self-information from the concatenation results in a performance drop of $2.4$% on Cora and $1.2$% on Citeseer, validating its contribution to robustness.

---

> **C4.** While a complexity analysis is included, it is suggested that the authors also provide empirical results.

| Model | Training time (per epoch) | Inference time ($\epsilon=0.25$) |
|-|-|-|
| GOOD-AT | 2744.9 ms | 16.2 ms |
| GPR-GAE | 65.6 ms | 87.2 ms |

The table compares GPR-GAE with the recent purification baseline, GOOD-AT, on Cora, reporting average training time per epoch and inference time under the PRBCD attack ($\epsilon = 0.25$) using GCN as the classifier. GPR-GAE trains much faster (65.6 ms vs. 2744.9 ms). While its inference time (87.2 ms) is higher, it remains practical; GOOD-AT’s cost stems from adversarial sample generation. For reference, inference with a standalone GCN takes 8.5 ms, showing GPR-GAE adds minimal overhead for substantial robustness gains.

---

[6] Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
[7] BernNet: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation
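For concreteness, the multi-filter GPR encoding discussed in C1 and C2 can be sketched as follows. This is an illustrative NumPy sketch with our own naming and toy coefficients, not the authors' implementation:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetric normalization D^{-1/2} A D^{-1/2}, with no self-loops added."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def gpr_filter(A_norm, H0, gammas):
    """Generalized PageRank filter: sum_k gamma_k * A_norm^k @ H0.
    The gamma_k play the role of the learnable monomial coefficients."""
    out = np.zeros_like(H0)
    Hk = H0.copy()
    for gamma in gammas:
        out += gamma * Hk
        Hk = A_norm @ Hk
    return out

# Toy 3-node path graph; two parallel filters with independent
# (here hand-picked) coefficients, outputs concatenated.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H0 = np.eye(3)  # initial node embeddings
A_norm = normalized_adjacency(A)
low_pass = gpr_filter(A_norm, H0, gammas=[0.5, 0.3, 0.2])    # smooth response
high_pass = gpr_filter(A_norm, H0, gammas=[1.0, -0.9, 0.4])  # sign-alternating
H = np.concatenate([low_pass, high_pass], axis=1)  # shape (3, 6)
```

Each call to `gpr_filter` realizes one spectral response; concatenating several such outputs gives the multi-filter representation, and the $k=0$ term (`gammas[0] * H0`) is what preserves ego-information in the encoding.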
VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models
Accept (oral)
Summary: The paper proposes a joint appearance-motion learning framework for video generation. The authors are motivated by the key observation that the common pixel-based training objective is invariant to temporal perturbations. Therefore, they propose to equip the model with an explicit motion learning objective via a joint representation learning framework. In addition, the authors propose an inner-guidance mechanism that steers the prediction towards more coherent motion. The proposed method is extensively evaluated on VideoJAM-Bench and shows advantageous performance, especially in motion coherence and physics following.

## Update after rebuttal

The authors have addressed all of my concerns with extensive experimental results. Therefore, I am keeping my score of Accept.

Claims And Evidence: The paper's claims are well supported by extensive empirical evidence. In particular, the experiment in Section 3 is insightful and effectively justifies the motivation of the paper. Methods And Evaluation Criteria: The proposed method is suitable for video generation because of its ability to enhance motion coherence. The evaluation criteria are extensive, including human evaluation and automatic metrics on VideoJAM-Bench and the Movie Gen benchmark, but there is room for improvement:

* **W1**: It would be better to evaluate on the original VBench dataset, as this would allow comparison with more open-source baselines on more general prompts.

Theoretical Claims: The paper makes no theoretical claims. Experimental Designs Or Analyses: The experimental designs are quite comprehensive, but could be improved by considering scalability:

* **W2**: The most counterintuitive result is that VideoJAM-30B is inferior to VideoJAM-4B in terms of quality in human evaluation and all automatic metrics, according to Tables 1 and 2.
This casts doubt on the scalability of the proposed method, since a well-known bitter lesson is that the introduction of human prior knowledge can reduce scalability. It would be better for the authors to explain the performance drop and provide additional results to prove scalability.

Supplementary Material: The supplementary material contains detailed experimental settings and additional experimental results, which enhance the overall completeness of the paper. Relation To Broader Scientific Literature: The paper advances video generation [Hong'22, Kondratyuk'23, Bar-Tal'24, Brooks'24] by learning an appearance-motion joint representation. Its improved physics capability contributes to world models [Ha'18, Brooks'24]. Essential References Not Discussed: Several related works on using motion representations for video generation are missing [Tulyakov'18, Shen'24, Jin'24].

---
[1] Tulyakov et al. MoCoGAN: Decomposing Motion and Content for Video Generation. CVPR 2018.
[2] Shen et al. Decouple Content and Motion for Conditional Image-to-Video Generation. AAAI 2024.
[3] Jin et al. Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization. ICML 2024.

Other Strengths And Weaknesses:

Strengths:
- The idea of appearance-motion joint learning in video generation is very intuitive, as the model would be more effective at ensuring motion consistency.
- The comprehensive experimental analyses provide valuable insights into the original DiT and the newly proposed model's behavior.
- The paper is well formatted and clearly written, making it easy to follow.

Weaknesses:
* **W3**: Reduced flexibility. Since optical flow can only be computed within a single scene, the proposed framework cannot effectively handle scene cuts. In this sense, the appearance-motion joint learning framework sacrifices flexibility on multi-scene video clips for better single-scene consistency.
* **W4**: Inference overhead due to inner-guidance.
According to Equation (8), each sampling step requires three evaluations, which imposes a 1.5x inference cost compared to standard CFG. It would be better to discuss this and make comparisons under the same NFE budget rather than the same number of sampling steps.

Other Comments Or Suggestions: Given the improved physics generation capability, it would be interesting to quantitatively evaluate it on recent physics-focused benchmarks such as PhyGenBench [Meng'24], Physics-IQ Benchmark [Motamed'25], and WorldModelBench [Li'25].

---
[1] Meng et al. Towards World Simulator: Crafting Physical Commonsense-Based Benchmark for Video Generation. 2024.
[2] Motamed et al. Do Generative Video Models Understand Physical Principles? 2025.
[3] Li et al. WorldModelBench: Judging Video Generation Models As World Models. 2025.

Questions For Authors:
* **Q1**: What is the computational cost of extracting optical flow for 3M fine-tuning samples? Is it possible to use coarser but more efficient motion representations such as motion vectors?

Code Of Conduct: Affirmed. Overall Recommendation: 4
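To make the NFE comparison in W4 concrete, here is a small illustrative calculation; the 50-step budget and the counting of evaluations per step are our own assumptions, not figures taken from the paper:

```python
def nfe(num_steps, evals_per_step):
    """Total number of model forward passes (NFE) for a sampler."""
    return num_steps * evals_per_step

steps = 50                       # assumed sampling-step budget
cfg_nfe = nfe(steps, 2)          # standard CFG: unconditional + text-conditional
guided_nfe = nfe(steps, 3)       # a third (e.g., motion) evaluation per step

overhead = guided_nfe / cfg_nfe  # the 1.5x cost noted in W4
# Under a matched NFE budget, the three-evaluation sampler
# would instead be allotted fewer steps:
matched_steps = cfg_nfe // 3
```

Comparing at `matched_steps` rather than `steps` would equalize compute between the two samplers, which is the comparison W4 asks for.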
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive feedback and the useful points for discussion. Please find below our response.

__VBench prompts:__ Both VBench and Movie Gen are common benchmarks for general video evaluation. The VBench prompts are not used in our work since, out of the 11 dimensions for evaluation, _6 purposefully contain prompts with no motion at all_ (e.g., “object class” contains objects: “a person”, “a bicycle”; “temporal flickering” explicitly demands no motion, “a laptop, frozen in time”). Therefore, similar to recent leading models (Step-Video-T2V, Veo2), we employ Movie Gen, which evaluates all aspects of the generation (similar to VBench) while also incorporating motion (for more details, see our response to reviewer RHDu).

__VideoJAM scalability:__ We appreciate the opportunity to clarify. While it is true that (only) the automatic metrics favor the 4B model, please note:

1. It is well-documented that automatic metrics often fail to align with human preferences [1,2,3]. For instance, Movie Gen dedicates a section (3.5.3) to metrics, highlighting that "across all tasks, we find that _existing automated metrics struggle to provide reliable results_”. Automatic metrics tend to prefer static motion since it is more “smooth” (as noted in the VBench paper, "a completely static video can score well in the aforementioned temporal quality dimensions"). Indeed, the 4B model scores better than the 30B model in the automated metrics, despite the 30B model being overwhelmingly superior (see human evaluation below). This anomaly can be attributed to the 4B model's tendency to produce videos with reduced motion (DiT-4B scores only 38.3 in its dynamic degree), which aligns with the biases of automated metrics discussed above.

|**Model**|**Text Faith.**|**Quality**|**Motion**|
|:------------:|:----------:|:-------:|:------:|
|VideoJAM-4B|21.9%|21.1%|18.7%|
|**VideoJAM-30B**|**78.1%**|**78.9%**|**81.3%**|

[1] Polyak et al., Movie Gen.
[2] Bar-Tal et al., Lumiere.
[3] Girdhar et al., Emu Video.

2. The [official VBench leaderboard](https://tinyurl.com/33tu3we5) shows similar inconsistencies. Models ranked highly by the [HuggingFace (HF) human leaderboard](https://tinyurl.com/28ea397z) are ranked significantly lower by VBench. For example, Kling is the leading baseline by both our human evaluation (Tab. 2) and HF. However, it is ranked 21st in the VBench leaderboard, below even much smaller models like CogVid-5B and Wan2.1-1.3B, both ranked much lower in the human leaderboard.

3. To further show robustness and scalability, we enclose a study in the same setting as Tab. 2, conducted on DiT-30B at a 512x512 resolution. VideoJAM maintains its significant improvement with a _71.1% lead in motion_, which is less than 2% below the improvement for the lower resolution.

|**Model**|**Text Faith.**|**Quality**|**Motion**|
|:------------:|:----------:|:-------:|:------:|
|DiT-30B (512)|30.5%|26.6%|28.9%|
| **+VideoJAM** |**69.5%**|**73.4%**|**71.1%**|

__Additional references:__ Thank you for the useful suggestions; the related works section will be revised to include them.

__Scene cuts:__ We hypothesize that it is still possible to use VideoJAM by applying RAFT on each scene separately and stitching the scenes with a “white” (no motion) frame.

__Inner-Guidance latency:__ Inner-Guidance is applied to only 50% of the steps and performed in parallel using a batch size of 3 (L.726-731). This results in a 1.15x slowdown compared to standard CFG, which is a significantly lower slowdown than other multi-guidance methods (e.g., Instruct Pix2Pix incurs a 1.25x slowdown as it operates on all steps).

__Physics benchmark:__ Thank you for this suggestion. We arbitrarily used the first benchmark suggested, and conducted an evaluation in the same setting as Tab. 2. The results below indicate a clear advantage to VideoJAM (62.7% preference in motion).
Note that it is expected for the improvement on this benchmark to be somewhat decreased, since optical flow is not an explicit physics representation and therefore does not resolve all physics incoherences (L. 412-418). As we mention in the conclusions, incorporating physics priors in future work is a fascinating direction. Samples from the comparison are provided in [this link](https://tinyurl.com/yhtmvx8p). We observe that these prompts are quite different from the ones employed by other benchmarks, and do not describe typical natural scenes.

|**Model**|**Text Faith.**|**Quality**|**Motion**|
|:------------:|:----------:|:-------:|:------:|
|DiT-30B|48.2%|49.2%|37.3%|
|+VideoJAM|**51.8%**|**50.8%**|**62.7%**|
The experiments were conducted using the official VBench prompts, in the setting described in the VBench paper, on the more powerful 30B variant of our model. We observe that in almost all axes (apart from the “scene” axis which describes static scenes such as “alley”, “amusement park”), the VideoJAM variant is either superior or equivalent to the base model. In axes that measure meaningful motion such as “human action”, VideoJAM is, as can be expected, superior to the base model. Note that as pointed out in our original response, some of the VBench temporal axes measure _frozen motion_ (e.g., “temporal flickering” is evaluated on purposefully static scenes). In these cases, we do not expect VideoJAM to improve on the performance of the base model. The results below are aligned with the intuition outlined in the paper, demonstrating that motion and appearance can be complementary. When the motion is coherent, it reduces body and object deformations and temporal artifacts, which in turn helps improve the perceived visual quality in videos (see SM website for many such examples). Additionally, we observe that VideoJAM does not harm the model’s ability to produce static scenes (see examples of such static videos from the VBench "frozen" evaluation prompts in [this link](https://tinyurl.com/r9y7h4jw)), which correspond to “white” (no motion) flow predictions. Following the reviews, we will add the table to the Appendix in the final version of the manuscript. 
| **Model** | **temporal flickering** | **object class** | **multiple objects** | **human action** | **color** | **spatial relationship** | **scene** | **appearance style** | **temporal style** | **overall consistency** |
| :----------: | :--------------------: | :--------------: | :------------------: | :--------------: | :------: | :----------------------: | :-------: | :-------------------: | :------------------: | :-----------------------: |
| DiT-30B | **99.66%** | 89.07% | 62.99% | 97.00% | 81.23% | 72.01% | **52.07%** | 22.05% | 24.19% | 27.53% |
| VideoJAM | **99.66%** | **90.65%** | **70.00%** | **99.00%** | **91.09%** | **73.32%** | 49.74% | **23.25%** | **24.42%** | **27.57%** |
Summary: Despite recent advancements, generative video models still exhibit significant limitations in temporal coherence, especially when modeling real-world dynamic interactions and physics. The authors identify that this issue arises fundamentally from the traditional pixel-based reconstruction objectives, which prioritize appearance quality at the expense of coherent motion. To address this critical limitation, this paper proposes VideoJAM, a framework designed to explicitly integrate motion priors into video generation. Specifically, VideoJAM introduces a unified latent representation to jointly capture appearance and motion information by predicting both signals simultaneously through the addition of two linear layers during training. Additionally, during inference, the authors propose Inner-Guidance, a novel mechanism that dynamically adjusts the sampling process based on the model's own motion predictions. Extensive experiments demonstrate that VideoJAM substantially improves motion coherence, consistently outperforming several state-of-the-art open-source and proprietary models. Claims And Evidence: The authors' core method (VideoJAM) is evaluated comprehensively on the proposed VideoJAM-bench, which is based on the Movie Gen benchmark, utilizing clearly defined evaluation standards such as VBench metrics and the two-alternative forced choice (2AFC) human evaluation protocol. Experimental results convincingly demonstrate VideoJAM’s substantial improvement in motion coherence. However, the authors' claim regarding the adaptability and generality of VideoJAM to arbitrary video generation models currently lacks sufficient experimental validation. Specifically, all provided experiments exclusively employ DiT-based architectures (DiT-4B and DiT-30B), and no additional evidence is presented to demonstrate the generalization capability of VideoJAM to other prevalent architectures.
Further validation using diverse, non-DiT video generation models is necessary to robustly substantiate this claim. Methods And Evaluation Criteria: The proposed VideoJAM framework is clearly motivated, conceptually straightforward, and innovative, effectively addressing the key issue of temporal coherence in generative video models. Specifically, the introduction of a joint appearance-motion representation and the Inner-Guidance mechanism provides explicit and effective guidance toward coherent motion predictions, which is theoretically sound and well justified. Additionally, the authors propose the VideoJAM-bench, specifically constructed based on the Movie Gen benchmark, focusing explicitly on challenging motion categories, including basic motion, complex motion, rotational motion, and physics-based interactions. The benchmark is well-designed, targeted, and capable of effectively evaluating motion coherence. Furthermore, the evaluation criteria, which include both automatic metrics (VBench) and human assessments (2AFC), are thorough and rigorous, comprehensively capturing the performance of the proposed approach. However, despite the authors emphasizing the widespread use of the datasets without any modification, the relatively low resolution and short duration of the videos utilized may limit the ability to demonstrate the framework's practical effectiveness under high-resolution, longer-duration scenarios. Therefore, it would be beneficial to validate the method further on higher-resolution and longer-duration videos to fully establish its practical applicability and generalizability in real-world settings. Theoretical Claims: The theoretical analysis presented in VideoJAM primarily revolves around two aspects: (1) the theoretical justification and motivation behind the proposed Joint Appearance-Motion Representation, and (2) the theoretical derivation and rationale of the Inner-Guidance mechanism. 
Specifically, the authors clearly articulate the theoretical reasoning behind jointly modeling appearance and motion representations, addressing the fundamental limitations of traditional pixel-based objectives. Furthermore, they rigorously discuss the theoretical distinctions between their proposed Inner-Guidance and existing methods such as Classifier-Free Guidance and InstructPix2Pix guidance, emphasizing the critical dependence between motion prediction signals and model weights, a scenario not supported by existing assumptions of independence between conditions. However, the selection of specific Inner-Guidance weights appears empirical, lacking theoretical justification or sensitivity analysis. A theoretical explanation or sensitivity analysis supporting the effectiveness of these particular weight values would further strengthen the theoretical rigor and clarity of the presented claims. Experimental Designs Or Analyses: The authors utilize not only the established Movie Gen benchmark but also introduce the VideoJAM-bench specifically designed to evaluate motion coherence, enhancing the relevance of their experimental setup. The selection of advanced baselines, including open-source models such as CogVideo5B and Mochi, and proprietary models such as Sora and Kling, ensures fairness and reliability of comparisons. The ablation studies are thorough and clearly demonstrate the contribution of text guidance, motion guidance, and the Inner-Guidance mechanism individually. However, the experiments are limited by relatively low video resolutions and short durations (5 seconds), which may not fully reflect the model’s effectiveness in realistic, high-resolution, and longer-duration scenarios. Additionally, although a clear comparison standard (2AFC) was provided, the paper lacks detailed information on the selection criteria of human raters and the explicit evaluation protocols, potentially introducing subjectivity and bias into the experimental outcomes. 
Supplementary Material: I thoroughly reviewed all supplementary materials provided by the authors, paying particular attention to the sections on Implementation Details, VideoJAM-bench, and the result videos. The supplementary materials clearly elaborate on experimental specifics not fully detailed in the main text, including precise model parameter settings, training procedures, and experimental designs. Additionally, the comprehensive description of VideoJAM-bench and the prompt selection process significantly clarified my evaluation of both qualitative video results and quantitative experimental outcomes. Relation To Broader Scientific Literature: The authors clearly identify the prevalent issue of motion incoherence in existing diffusion-based video generation models and thoroughly analyze its root cause—namely, traditional pixel-based reconstruction objectives being insensitive to temporal dynamics. Inspired by Composable Diffusion Models (Liu et al., 2022) and Classifier-Free Guidance (Ho & Salimans, 2022), they propose a novel conditional sampling mechanism termed Inner-Guidance. Unlike previous methods, VideoJAM explicitly emphasizes dependencies among conditioning signals and between conditioning signals and model weights, rather than assuming their independence. By dynamically leveraging motion predictions generated by the model itself during sampling, VideoJAM substantially enhances the coherence of generated motion. Furthermore, compared to recent motion representation approaches (e.g., Shi et al., 2024; Wang et al., 2024), which typically treat motion as an external conditioning input, VideoJAM explicitly integrates motion prediction into the training objective, systematically improving both motion generation quality and video coherence. 
Essential References Not Discussed: No Other Strengths And Weaknesses: Although VideoJAM demonstrates superior motion coherence compared to existing methods, it still struggles with fine-grained motion details, such as hand movements, in complex scenarios (e.g., "On a rainy rooftop, a pair of hip-hop dancers lock and pop in perfect sync, bringing energy and rhythm to the stage," and "A panda breakdancing in a neon-lit urban alley, graffiti art providing a colorful backdrop"). This limitation suggests room for future improvement in handling intricate motion coherence. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive feedback and the interesting points for discussion. Please find below our response to the points raised in the review.

__VideoJAM adaptability:__ We appreciate the feedback. While all concepts of our work can be easily generalized to any backbone, we acknowledge that this work focuses on the de-facto leading architecture, DiT. Following the review, the writing will be revised and refined to reflect this and avoid confusion.

__Higher resolution and longer videos:__ The duration of our generated videos is limited by that of the base model we employ, which is 5 seconds (128 frames at 24 fps). However, to showcase the scalability of VideoJAM, we repeat the experiment from Tab. 2 on VideoJAM-30B with a higher resolution of 512x512. The results of the human study are enclosed below. As can be observed, VideoJAM is, once again, superior to the base model in all aspects, and very significantly improves motion coherence, further establishing the scalability of our method.

| **Model** | **Text Faithfulness** | **Visual Quality** | **Motion** |
|:------------:|:----------:|:-------:|:------:|
| DiT-30B (512) | 30.5% | 26.6% | 28.9% |
| **+ VideoJAM** | **69.5%** | **73.4%** | **71.1%** |

__Inner-Guidance sensitivity test:__ Thank you for this suggestion. In accordance with common practice in diffusion papers, we selected the best scale empirically. Following the review, we enclose a qualitative sensitivity test with different Inner-Guidance scales in the [following link](https://tinyurl.com/3fx38map) (best viewed in full screen). All results are extracted in the same setting described in the paper. As can be observed, removing Inner-Guidance causes a noticeable degradation to the motion coherence (e.g., the legs of the man, the helicopter is flying backward). Importantly, Inner-Guidance demonstrates robust results across all reasonable scales (3, 7) and does not display unusual sensitivities.
When significantly increasing the scale (50), the results are out of distribution and cause a degradation in the video quality and motion, which is to be expected with any other guidance signal as well (e.g., text with CFG). __Human evaluation protocol:__ Thank you for raising this point, we appreciate the opportunity to clarify this aspect of our work. The human evaluators selected to participate in our study have extensive experience with evaluating generative visual models. As a filtering criterion, each evaluator has performed _at least 1000 evaluations_ before, where _at least 90% of those evaluations have been approved by third-party examiners_. The two videos in each comparison are randomly shuffled to ensure an unbiased comparison, and all videos are generated without watermarks to avoid identification. Additionally, note that the rating of the baselines by our evaluators (Tab. 2) is very much in correlation with the [public HuggingFace video leaderboard based on human evaluations](https://tinyurl.com/28ea397z), where Kling 1.5 is the leading baseline, with Sora being the second strongest baseline. We enclose below our instructions for the evaluators for transparency. > Hello! We need your help to read a caption, and then watch two generated videos. After watching the videos, we want you to answer a few questions about them: >Text alignment: Which video better matches the caption? >Quality: Aesthetically, which video is better? >Motion: Which video has more coherent and physically plausible motion? Do note, it is OK if the quality is less impressive as long as the motion looks better. We are happy to address any remaining questions.
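As a generic illustration of the kind of multi-term guidance whose scales are being varied in the sensitivity test above (a sketch under our own assumed form and weights; the paper's exact Equation (8) may differ), a second guidance term can be composed on top of CFG as follows:

```python
import numpy as np

def guided_prediction(eps_uncond, eps_text, eps_motion, w_text=7.5, w_motion=3.0):
    """Generic multi-term guidance: extrapolate from the unconditional
    prediction along the text and motion directions. The weights here
    are illustrative, not the paper's values."""
    return (eps_uncond
            + w_text * (eps_text - eps_uncond)
            + w_motion * (eps_motion - eps_uncond))

# The three predictions could be obtained in one batched forward pass
# (batch size 3); here we use random stand-ins for the model outputs.
rng = np.random.default_rng(0)
eps_uncond, eps_text, eps_motion = rng.normal(size=(3, 4))
out = guided_prediction(eps_uncond, eps_text, eps_motion)
```

With `w_motion=0` this reduces to standard text guidance, consistent with the qualitative observation that moderate motion scales behave robustly while extreme values push the prediction out of distribution.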
Summary: This paper presents VideoJAM, a framework that improves motion coherence in generative video models by learning a joint appearance-motion representation. It introduces two key components: predicting both pixels and motion during training, and Inner-Guidance for coherent motion during inference. VideoJAM outperforms competitive models in motion coherence and visual quality, highlighting the importance of integrating appearance and motion for better video generation. Claims And Evidence: Claims: VideoJAM improves motion coherence in video generation models by learning joint appearance-motion representations, using dynamic guidance to steer generation toward coherent motion. Evidence: The paper provides comparisons showing VideoJAM outperforms competitors in motion coherence and visual quality. However, the ablation study raises doubts about the necessity of the Inner-Guidance component, as it has little effect on results. Methods And Evaluation Criteria: Methods: VideoJAM introduces two key components: predicting both pixels and motion during training, and using Inner-Guidance during inference to enhance motion coherence. Evaluation Criteria: The model’s performance is evaluated based on motion coherence, visual quality, and quantitative metrics (Human Eval, Auto Metrics), but the Human Eval results for the final method are not provided. Theoretical Claims: The paper suggests that integrating motion and appearance representations during training leads to more coherent motion in generated videos, which is a significant improvement over previous methods. Experimental Designs Or Analyses: The paper includes ablation studies to assess the impact of different components like Inner-Guidance and optical flow. However, the removal of Inner-Guidance shows minimal effect, and the correlation between Human Eval and Auto Metrics is unclear. 
Supplementary Material: No

Relation To Broader Scientific Literature: The paper builds on existing video generation models but emphasizes the novel integration of motion and appearance. However, it could engage more with related work on motion coherence and the role of dynamic guidance in generative models.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
1. The idea of using the model’s own evolving motion prediction as a dynamic guidance signal to steer generation toward coherent motion is interesting and highlights the difference from previous work.
2. The presentation is clear and easy to understand.

Weaknesses:
1. The introduction of motion representation can significantly improve the effectiveness of motion, and this conclusion is predictable. However, I noticed in the ablation study in Table 3 that removing Inner-Guidance has little effect on the results, while removing optical flow has the largest impact. This raises concerns about the necessity of Inner-Guidance. Additionally, I would like the authors to provide information on the impact of Inner-Guidance on inference time.
2. I found that there is no correlation between the Human Eval and Auto. Metrics in Tables 1, 2, and 3, and the authors did not provide the Human Eval results for the final method. Therefore, I remain skeptical about the true effectiveness of the work.

Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and points for discussion. Please find below our response.

__Human evaluation:__ Thank you for your feedback. There appears to be a misunderstanding in the review. Importantly, _VideoJAM appears in all human evaluations_. As highlighted in all table captions, _the human evaluation scores indicate the percentage of votes that favor VideoJAM_, where each result uses the Two-Alternative Forced Choice (2AFC) protocol (Tabs. 1-3, L. 371-377). For example, a motion score of 85.9% for CogVid-5B indicates that evaluators prefer the motion by VideoJAM over CogVid-5B in 85.9% of votes. Since all rows are inherently comparative to VideoJAM, the final row ("VideoJAM vs. VideoJAM") contains minus signs (-). To further improve clarity, we will add visual graphs of the comparisons in the Appendix, and replace the minus signs with up arrows. As noted in the paper (L. 400-403, L. 413-419), human raters consistently and significantly prefer VideoJAM in terms of motion over all baselines.

__Impact of Inner-Guidance:__ Inner-Guidance is a highly beneficial component of our method. As shown in Tab. 3, human evaluators significantly prefer the results with Inner-Guidance across all categories (68.9% vs. 31.1% for text faithfulness, 64.4% vs. 35.6% for quality, and 66.2% vs. 33.8% for motion). Also, note that _removing the optical flow prediction inherently removes Inner-Guidance_, as Inner-Guidance depends on the motion prediction. Thus, eliminating optical flow is an ablation that includes _both Inner-Guidance and the flow prediction_ and, as expected, results in a greater performance drop. To further demonstrate the benefits of Inner-Guidance, we enclose a [qualitative ablation test](https://tinyurl.com/5c3jdab7) (best viewed in full screen). The results, generated in the same setting as in the paper, illustrate that removing Inner-Guidance often degrades motion coherence and introduces artifacts.
__Inner-Guidance latency:__ As noted in the implementation details (L.726-731), Inner-Guidance is applied to only 50% of the steps (following Sec. 3) and performed in parallel using a batch size of 3. This results in a 1.15x slowdown compared to the standard CFG, which is a significantly lower slowdown than other multi-guidance methods (e.g., Instruct Pix2Pix incurs a 1.25x slowdown since it operates on all steps). __Additional related works:__ We appreciate the suggestion. The related works section will be revised to include all references proposed by the reviewers, ensuring a more comprehensive review of existing literature. __Automatic metrics vs. human evaluation:__ Thank you for raising this point. We appreciate the opportunity to clarify. 1. It is well-documented in literature that automatic metrics often fail to align with human preferences [1,2,3]. For instance, the Movie Gen paper dedicates an entire section (3.5.3) to metrics, and highlights that "across all tasks, we find that _existing automated metrics struggle to provide reliable results_”. Specifically, temporal metrics tend to prefer either static motion since it is more “smooth” (as noted in the VBench paper "a completely static video can score well in the aforementioned temporal quality dimensions."), or completely incoherent videos that contain a lot of movement (Appendix, L. 767-768). Appearance metrics inherently prefer static videos, since they are typically based on _image models_, for which video frames that contain motion are out of distribution. All of the above underscores the inherent limitations of automated metrics in capturing the dynamic and nuanced nature of video content, and the necessity for a human-based evaluation. [1] Polyak et al., Movie Gen. [2] Bartal et al., Lumiere. [3] Girdhar et al., Emu Video. 2. The [official VBench leaderboard](https://tinyurl.com/33tu3we5) further highlights these inconsistencies. 
Models ranked highly by the [HuggingFace (HF) human leaderboard](https://tinyurl.com/28ea397z) are ranked significantly lower by VBench. For example, Kling 1.5 is the leading baseline by both our human evaluation (Tab. 2) and HF. However, it is ranked 21st in the VBench leaderboard, below even much smaller models like CogVid-5B and Wan2.1-1.3B, both ranked much lower in the HF human leaderboard. We enclose both automated metrics and human evaluations for completeness and believe continued research on reliable automatic metrics is essential for video generation assessment. __Supplementary materials (SM):__ We noticed that the reviewer indicated “no” to viewing the SM. While we understand that reviewing SM is optional, we believe that the actual video results are very important to fully appreciate the improvement enabled by VideoJAM. We kindly ask the reviewer to consider viewing the results on our SM website. We are happy to address any remaining questions, and respectfully ask the reviewer to reconsider the score if our response has addressed all the questions.
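As a side note on the 2AFC protocol discussed in this thread: preference rates can be checked against the 50% chance level with a simple proportion test. The vote count below is hypothetical, chosen only to roughly match the 85.9% preference rate cited in the rebuttal; this is an illustrative sketch, not the authors' evaluation code.

```python
import math

def two_afc_z(prefer_votes, total_votes):
    """Normal-approximation z-test of a 2AFC preference rate against the
    50% chance level; returns (rate, z, two-sided p-value)."""
    p_hat = prefer_votes / total_votes
    se = math.sqrt(0.25 / total_votes)             # std. error under H0: p = 0.5
    z = (p_hat - 0.5) / se
    p_value = 1 - math.erf(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return p_hat, z, p_value

# Hypothetical tally: 430 of 500 votes (86%) prefer one model, roughly
# matching the 85.9% preference rate mentioned above.
rate, z, p = two_afc_z(430, 500)
assert rate > 0.5 and p < 1e-3  # preference is far above chance
```

With several hundred votes, preference rates in the 60-90% range are overwhelmingly significant under this test, which is consistent with the rebuttal's claim that raters "consistently and significantly" prefer one system.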
Summary: The authors present VideoJAM (Joint Appearance-Motion representation), with the aim of capturing real-world motion, dynamics, and physics, which existing video generative models struggle to handle. In particular, the authors discover that the current video model training objective biases models towards fidelity, at the expense of motion coherence or temporal consistency. To this end, the authors propose to train video generative models to predict the motion of the video - represented by optical flow masks - as well, thereby training the models on joint appearance (pixel) - motion (optical flow) representations. This requires adding a few layers (to handle the extra input and output), and training together **with** the optical flow supervision can only happen at the final finetuning stage. Now that the video generative model is equipped with the ability to recover the motion, the authors propose Inner-Guidance to steer the model towards coherent motion. VideoJAM achieves state-of-the-art performance in motion coherence, while also improving the visual quality of the generated videos. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Claim 1: The current objective used for training video generative models is biased towards pixel fidelity, and neglects motion coherence. - Substantiated by Figure 3 and Figure 8. Claim 2: VideoJAM also improves the visual quality of generated videos, not only the motion coherence. - Validated in the quantitative/qualitative results in the experiment section. Methods And Evaluation Criteria: Yes. The authors use both human evaluation and the automatic metrics from VBench, the currently standard suite for evaluating video generative capabilities. 
Theoretical Claims: Yes - section 4.3 sufficiently shows that since $d_t$ depends on the prompt and model weights (i.e., it is not independent of the model weights), the conventional CFG formulation cannot be used to guide motion coherence. The proposed Inner-Guidance in Eq. 8 is derived with substantial motivation and correctness. Experimental Designs Or Analyses: The experimental designs and analyses seem sound and valid. In particular, it was nice to see human evaluation together with the automatic metrics from VBench, since the automatic metrics may not directly reflect actual human preferences. Supplementary Material: Yes, I reviewed the attached videos, and it was visibly evident that VideoJAM is effective at improving motion coherence and temporal consistency. Relation To Broader Scientific Literature: Similar endeavors were carried out in the text-to-3D / multi-view domain, where Wonder3D (Long et al., 2024) attempts to predict the normal map together with the pixel values during the generation process. Such work, including VideoJAM, hints that appropriate usage of additional modalities may help improve the generative results while not harming the generative quality. Essential References Not Discussed: While not an essential reference, adding Wonder3D as a reference may help readers understand that the idea of engaging additional modalities is a sound way of improving generative quality. Otherwise, I could not find any essential references that were not discussed. Other Strengths And Weaknesses: Strengths: S1. Strong motivation and effective results. S2. Simple and straightforward methodology without complications, yielding effective results. Weaknesses: W1. Not a strong weakness, but the paper does not include ALL the evaluation metrics provided in VBench. More results are included in Tables 4 and 5, but it would help to include the full results from VBench for the authors to fully understand the capabilities and shortcomings of VideoJAM. 
Other Comments Or Suggestions: Please refer to W1 mentioned above. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive review of our work and the insightful suggestions. Please find below our response to the points raised in the review. __Related work on text-to-3D:__ Thank you for bringing this work to our attention. We will revise the related works section to reference this paper in the final version of the manuscript. __VBench metrics:__ Thank you for this point for discussion. VBench has two different contributions: a benchmark and a set of metrics. The benchmark is not used in our work since a significant part of the prompts there _intentionally describe static scenes_. Most categories of the benchmark (e.g., architecture, food, scenery, plants) are dominated by prompts that do not elicit any motion (e.g., “an apartment building with balcony”, “video of a indoor green plant” (grammar mistake in the original prompt), “landscape agriculture farm tractor”). Even the categories that entail motion (e.g., humans and animals) tend to focus on prompts that do not require meaningful temporal understanding (e.g., “people sitting on a sofa”, “a man with a muertos face painting”, "a black dog wearing halloween costume”). Additionally, out of the 11 dimensions for evaluation, _6 purposefully contain prompts with no motion at all_ (e.g., the class “object class” simply contains a list of objects such as “a person”, “a bicycle”, the class “color” contains prompts that assign colors to objects such as “a red bicycle”, “a green bicycle”). Therefore, similar to concurrent leading video models (Step-Video-T2V, Veo2) we opt to employ the Movie Gen benchmark as a general quality evaluator since it is carefully curated to evaluate all aspects of the generation (similar to VBench) while also explicitly incorporating motion instructions in all the prompts (see Section 3.5 of the [Movie Gen paper](https://arxiv.org/pdf/2410.13720) for details on the Movie Gen benchmark). 
The set of metrics proposed by VBench is divided into two: those that are applied by VBench to external benchmarks (see [official implementation](https://tinyurl.com/yc3xhyk5)) and a set that includes additional metrics that apply only to the VBench prompts. An example of such a metric is the VBench metric to estimate temporal flickering which requires _frozen videos_, thus the [prompts explicitly encourage no motion at all](https://tinyurl.com/yyd2z9b9). Naturally, we used _all metrics that are supported for external benchmarks_ to evaluate the performance of VideoJAM in the setting that it is intended for. This allows us to benefit from the generic and meaningful prompts from Movie Gen with the disentangled axes of evaluation from VBench. Notably, both benchmarks are not suitable for measuring motion coherence, as both tend to focus more on appearance and less on temporal understanding (L. 330-357). Thus, we construct an additional benchmark (VideoJAM-bench) to estimate temporal coherence and employ Movie Gen for a general assessment of the video quality. We are happy to address any remaining questions by the reviewer. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thoughtful answer to my comments. **VBench metrics:** I appreciate the clarification provided, as well as your development of VideoJAM-bench to better assess temporal coherence. I recognize that your primary focus is on advancing motion generation, and I understand that many existing evaluation metrics, particularly those in VBench, do not adequately capture motion-related aspects. However, while motion generation is central to your contribution, video generation as a field encompasses more than just motion. Even though your work introduces the Movie Gen benchmark to evaluate human perceptual quality, many recent video generation methods include comparisons using the full VBench benchmark (including newer iterations such as VBench++ or VBench2.0). 
Reporting these additional metrics would provide a more comprehensive evaluation and allow the community to better understand the strengths and limitations of your method across all facets of video generation. I fully appreciate the novelty of your approach and the significant motion improvements demonstrated in the supplementary qualitative results. Including the full VBench results, even for categories not directly related to motion, would enhance the overall context of your work and help readers situate your contributions within the broader landscape of video generation research. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the continued discussion. We are delighted that the reviewer appreciates the novelty and the significance of our approach and we are grateful for the opportunity to further demonstrate the robustness of VideoJAM. Please find below a table with the additional 10 evaluation axes of VBench for completeness of evaluation (the other 6 are already reported in the Appendix). The experiments were conducted using the official VBench prompts, in the setting described in the VBench paper, on the more powerful 30B variant of our model. We observe that in almost all axes (apart from the “scene” axis which describes static scenes such as “alley”, “amusement park”), the VideoJAM variant is either superior or equivalent to the base model. In axes that measure meaningful motion such as “human action”, VideoJAM is, as can be expected, superior to the base model. Note that as pointed out in our original response, _some of the VBench temporal axes measure frozen motion_ (e.g., “temporal flickering” is evaluated on purposefully static scenes). In these cases, we do not expect VideoJAM to improve on the performance of the base model. These results further substantiate the robustness of our method and its ability to improve motion coherence while maintaining, and even improving, the visual quality of the model. 
Following the reviews, we will add this table to the Appendix in the final version of the manuscript.

| **Model** | **temporal flickering** | **object class** | **multiple objects** | **human action** | **color** | **spatial relationship** | **scene** | **appearance style** | **temporal style** | **overall consistency** |
| :----------: | :--------------------: | :--------------: | :------------------: | :--------------: | :------: | :----------------------: | :-------: | :-------------------: | :------------------: | :-----------------------: |
| DiT-30B | **99.66%** | 89.07% | 62.99% | 97.00% | 81.23% | 72.01% | **52.07%** | 22.05% | 24.19% | 27.53% |
| VideoJAM | **99.66%** | **90.65%** | **70.00%** | **99.00%** | **91.09%** | **73.32%** | 49.74% | **23.25%** | **24.42%** | **27.57%** |
LieRE: Lie Rotational Positional Encodings
Accept (poster)
Summary: The authors introduce a type of positional embedding which extends the RoPE embeddings by introducing learnable rotation matrices. ## update after rebuttal I thank the authors for their thorough response. In light of this, I will increase my score to weak accept. Claims And Evidence: The authors present reasonable although somewhat limited experimental validation, by training ViT models on CIFAR-10, Imagenet-1k and UCF101. The gap between their model and RoPE-Mixed (from which it is an incremental modification) is very small. Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: Yes, I took a look at the FLOPs tables and the attention maps Relation To Broader Scientific Literature: The literature review is thorough, and I appreciate that the authors are honest in acknowledging similarities with existing methods ("Note that the only difference between LieRE and RoPE-Mixed is that the latter constrains the rotations to be block-diagonal with block size two"). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Paper is well-written and easy to follow - Method seems to marginally outperform existing methods at little compute increase Weaknesses: - Novelty: as acknowledged by the authors, this method is an incremental modification of the existing RoPE-Mixed embeddings where instead of having block matrices of block size 2, the block size becomes a hyperparameter. - Effectiveness: I am not convinced of the benefits of this method. First, as shown in figure 5 and table 2, the increase in performance is rather marginal. Second, as shown in Figure 8, the effect of this hyperparameter on performance is rather unpredictable, which does not make this method particularly practical. Although sections 5.2 and 5.6 of the paper are a bit more convincing, I remain lukewarm about the effectiveness of the method. 
- Unpolished: the paper seems to have been rushed nearing the deadline and feels unpolished. Consider section 5.2: the second paragraph ends with an unfinished sentence ("The only exception to this is absolute position encodings, where we have variants trained on 800,000 and") and the third paragraph contains an undefined reference. Additionally, figures 1 and 2 are not very clear in my opinion. Other Comments Or Suggestions: None Questions For Authors: None Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer yH2U, Thank you for your thoughtful review and detailed feedback. We understand your concerns about the incremental nature and effectiveness of our work, and would like to address these directly: **On novelty and the primary contribution of the work**: We split our contributions broadly into (1) analysis and (2) method. **Analysis**: The closest work to ours is RoPE-Mixed. We build on their work with both a substantially larger performance delta and more extensive analysis. We aim to include coverage of technical details not present in the most similar prior work, such as the sensitivity to weight initialization (see the table in the response to reviewer gAmX) and different methods of defining extrapolated token positions. In addition to an extended quantitative comparison, we aim to provide qualitative insights into how the position encodings affect inference and training dynamics with the attention maps and patch shuffling experiments. Finally, we extend experimental coverage to study the effect of the dimensionality of the input, a first even for existing position encodings. Concretely, to the extent of our knowledge, this is also the first work that benchmarks RoPE-Mixed on 3D data. **Method**: We believe strongly that the presentation of a technical work should be as easy to understand as possible. This motivates us to keep the connection to existing methods simple, including being direct about the settings where the methods are identical and about when the additional expressiveness of LieRE is worth the added complexity. Novel technical machinery is required in order to utilize high-dimensional rotation matrices that are dense or have block size more than two. In particular, we introduce the method of encoding the positions in a basis of skew-symmetric matrices which is then passed through the matrix exponential to obtain dense high-dimensional rotations. 
This use of Lie group theory is novel, and the key ingredient enabling the use of dense high-dimensional rotations, which allows LieRE to go beyond RoPE-Mixed with statistically significant performance improvements.

| Dataset | p-value (RoPE-Mixed vs. LieRE_8) |
|---------|----------------------------------|
| CIFAR-100 | 1.3e-05 |
| ImageNet-1k | 6.3e-03 |
| UCF101 | 7.1e-04 |
| 384 x 384 (Resolution Invariance) | 4.0e-04 |

Your observation about block size unpredictability helped us recognize the need to better articulate our findings: LieRE_8 consistently demonstrates optimal performance across both 2D and 3D experiments (Appendix B.9, Basis parameters, Table 8). We will incorporate clear guidelines for practitioners based on our systematic analysis. **Writing Improvements**: We will address the writing issues you identified: complete the unfinished sentence in section 5.2, fix the undefined reference, and clarify Figures 1-2 with improved visual explanations (Figure 5: https://postimg.cc/D8w2tcKp). We believe these revisions will better communicate both the theoretical contributions and practical benefits of our work. We again thank Reviewer yH2U for their review of our paper. We hope that the above responses adequately address all concerns. **=== LieRE vs. RoPE 1D proof continued ===**

$$
\\begin{align*}
x_t K R_t^T R_{t'} Q x_{t'} &= x_t K S^T \\exp(t \\Lambda)^T S S^T \\exp(t' \\Lambda) S Q x_{t'} \\\\
&= x_t K S^T \\exp(t \\Lambda)^T \\exp(t' \\Lambda) S Q x_{t'}
\\end{align*}
$$

We let $K'=K S^T$ and $Q'= S Q$; since these matrices are all learnable, we can fold the $S$ matrix into the parameters of the key and query linear layers for the given head, allowing us to simplify the above expression.

$$
x_t K' \\exp(t \\Lambda)^T \\exp(t' \\Lambda) Q' x_{t'}
$$

Now we use the fact that each block is skew-symmetric. 
In the case of two dimensions,

$$
\\exp\\left(\\begin{pmatrix} 0 & \\lambda \\\\ -\\lambda & 0 \\end{pmatrix}\\right) = \\begin{pmatrix} \\cos(\\lambda) & \\sin(\\lambda) \\\\ -\\sin(\\lambda) & \\cos(\\lambda) \\end{pmatrix}
$$

If we let $R_{t \\Lambda}$ denote the block-diagonal rotation matrix with 2D rotations of angles $t\\lambda_0, \\ldots, t\\lambda_n$, we can rewrite the above expression in a more familiar form.

$$
x_t K' R_{t \\Lambda}^T R_{t' \\Lambda} Q' x_{t'}
$$

This is exactly the formulation of the original RoPE position encoding. This also makes clearer how LieRE differs from RoPE-Mixed in the high-dimensional setting. The above proof depends on the fact that we can decompose every rotation $R_t$ as $S^T \\exp(t \\Lambda) S$ with $S$ not dependent on the position, allowing us to fold the orthogonal $S$ matrices into the key and query matrices. This decomposition with constant $S$ is guaranteed because the inputs to the matrix exponential differ by only a scalar factor. This is no longer true once we switch to more than a one-dimensional basis of skew-symmetric matrices.
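The 1D equivalence argument in the proof above rests on two checkable facts: $\exp(tA)$ is orthogonal for skew-symmetric $A$, and rotations generated by the same $A$ satisfy the relative-position property $R_t^T R_{t'} = R_{t'-t}$. A minimal NumPy sketch of both follows; the Taylor-series matrix exponential is an illustrative stand-in (real code would use `scipy.linalg.expm`), and the dimension and positions are arbitrary.

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via a truncated Taylor series (adequate for the
    small-norm matrices used here)."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        result = result + term
    return result

rng = np.random.default_rng(0)
d = 6  # per-head embedding dimension (illustrative)

# 1D LieRE: a learnable skew-symmetric generator A, rotation R_t = exp(t A).
M = rng.normal(size=(d, d))
A = (M - M.T) / 2                      # skew-symmetric
R = lambda t: expm(t * A)

t1, t2 = 0.3, 1.1
# Orthogonality: exp(tA)^T exp(tA) = I, so R_t really is a rotation.
assert np.allclose(R(t1).T @ R(t1), np.eye(d), atol=1e-8)
# Relative-position property: R_t^T R_{t'} = R_{t'-t}, because t*A and t'*A
# commute, so their exponentials combine.
assert np.allclose(R(t1).T @ R(t2), R(t2 - t1), atol=1e-8)

# Diagonalizing A also exhibits the S^T exp(t Lambda) S structure from the
# proof: the eigenvalues of a skew-symmetric matrix are purely imaginary.
w, U = np.linalg.eig(A)
assert np.allclose(w.real, 0, atol=1e-8)
R_t1_eig = (U @ np.diag(np.exp(t1 * w)) @ np.linalg.inv(U)).real
assert np.allclose(R_t1_eig, R(t1), atol=1e-6)
```

In dimensions above one, each position component scales a different basis matrix; those generators no longer commute in general, which is exactly why the constant-$S$ decomposition (and hence the collapse to RoPE) fails there.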
Summary: LieRE extends the popular RoPE by replacing its block-diagonal 2D rotation matrices with learned, dense, high-dimensional rotation matrices derived from Lie group theory. The authors show that LieRE addresses key limitations of RoPE, particularly for multi-dimensional data like images and videos. Specifically, while RoPE was originally designed for one-dimensional sequence processing (like text), LieRE generalizes position encoding to higher dimensions through the use of Lie groups. The method involves learning skew-symmetric basis matrices and computing rotation matrices for n-dimensional positions, which are then applied to keys and queries in the attention mechanism. The paper evaluates LieRE against other positional encoding methods on several tasks, and the results show that LieRE outperforms competing methods across these tasks, with particular advantages in data efficiency, resolution invariance, and when processing limited training data. Claims And Evidence: The claims made in the paper are well-supported by empirical evidence, but I just want to point out that the assertion that LieRE provides a "unified approach" for handling different dimensionalities is supported by experiments on 2D and 3D data, but testing on additional dimensionalities (like 1D sequences or higher-dimensional data) would make this claim more robust. Methods And Evaluation Criteria: The methodology and evaluation criteria seem well designed, even though the paper focuses on classification tasks, which may not fully showcase the advantages of better positional encodings. Tasks requiring fine-grained spatial understanding (like segmentation or object detection) would provide a more comprehensive evaluation. Theoretical Claims: There's no formal proof that LieRE can handle "exponentially many relative positions for n-dimensional data" better than alternatives, though the empirical results are supportive. Experimental Designs Or Analyses: The experimental design is sound. 
Supplementary Material: I read the appendix. Relation To Broader Scientific Literature: The paper builds directly upon RoPE and its variants (RoPE-Mixed, VisionLlama), clearly identifying limitations and proposing extensions. The application of Lie group theory to positional encodings is a new connection between abstract algebra and deep learning architectures. The paper doesn't extensively discuss connections to other approaches for handling multi-dimensional data in transformers, such as axial attention or perceiver architectures. Overall, the paper makes a significant contribution to positional encoding research. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Other weaknesses: - The authors note that "RoPE-Mixed is sensitive to the initialization of weights," suggesting that LieRE might share this sensitivity, but they don't thoroughly explore how different initialization strategies might affect performance. Other strengths: - The paper provides a solid mathematical foundation based on Lie group theory, extending positional encodings beyond the current state-of-the-art methods. - LieRE shows very good ability to generalize to image resolutions not seen during training, outperforming other methods especially at higher resolutions. - The patch shuffling experiments offer valuable insights into how much the model utilizes positional information, with LieRE showing the most significant performance drop when positional information is disrupted. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear Reviewer gAmX, Thank you for your thorough and supportive review. We particularly appreciate your recognition of our mathematical foundations and empirical results. We have built upon your feedback to further improve the paper. In addition to the changes below, we have also expanded the paper to better relate our work to other architectural work such as perceivers and axial attention. **On sensitivity to initialization**: We originally discovered the sensitivity as we were iterating to reproduce the results of RoPE-Mixed. RoPE-Mixed seems to be particularly sensitive to the scale of the initial weights. LieRE is about half as sensitive. We have added a section to the appendix that explores this in greater detail, but in short, RoPE-Mixed shows up to a 2% drop in CIFAR-100 performance while LieRE shows a 1% drop. Outside of the new initialization sensitivity experiment, all experiments for both LieRE and RoPE-Mixed are performed with the setting where RoPE-Mixed performs best. The key table is presented below.

| Metric | LieRE_8 (2π vs 1 init) | RoPE-Mixed (2π vs 1 init) |
|---------|------------------------|---------------------------|
| Z-statistic | -1.81 | -3.85 |
| P-value | 0.070 | 0.00012 |
| Difference between means | -0.0118 | -0.0255 |
| 95% Confidence Interval | [-0.0246, 0.0010] | [-0.0385, -0.0125] |

**1D evaluation tasks**: We have added a section characterizing the use of LieRE for 1D tasks. In short, LieRE is equivalent in representational capacity to RoPE with learnable frequencies for one-dimensional tasks. Please see the response to reviewer “udrb” for the proof of this fact that will be included in the paper. Our empirical experiment confirms this equivalence. **Higher-dimensional evaluation tasks**: Thank you for the suggestion! To our knowledge, LieRE is the first approach to extend rotational position encodings to 3D data, and we are eager to explore scaling to 4D. 
Given the dataset size required to train transformers on high-dimensional data, such extensions may currently rely more on synthetic data or alternative ways of structuring dimensional information. We see this as an exciting next step and appreciate your insights on pushing this further. **Related Work**: Thank you for pointing out the connection to axial attention and Perceivers. We have added a related work section focused on compute-efficient scaling beyond sequence data. Axial Attention [19] reduces computational complexity by applying attention along specific axes (e.g., rows and columns in images), enabling transformers to scale efficiently with high-dimensional data. Perceiver [20] utilizes latent tokens to compress high-dimensional inputs into a smaller set, improving scalability as input size and dimensionality grow. These methods address the inefficiencies of traditional transformers when applied to high-dimensional data. Additionally, techniques like Swin [21] and Vmamba [22] optimize compute for visual data. Swin Transformer introduces a hierarchical approach with shifted windows, limiting attention to local regions to reduce complexity while capturing global context. Vmamba, on the other hand, proposes a visual state space model that represents images as a collection of spatial states, allowing attention to be applied efficiently across large-scale visual inputs by exploiting spatial locality and reducing redundant computation. It would be great to combine these methods with LieRE, as in this work we use a plain encoder transformer. Thank you for the constructive feedback, which we were able to use to further improve the paper. We are excited to share this work with the community. **=== LieRE vs. RoPE 1D proof ===** Though focused on higher-dimensional inputs, LieRE remains compatible with 1D tasks. It turns out that in the 1D setting, LieRE has identical representational capacity to RoPE. 
This is not the case for higher-dimensional inputs, for reasons that will become clearer later in the exposition. We include a cut-down version of the proof we propose to add to the paper below. Recall that in the 1D setting positions are scalars. The LieRE rotation is $R=\\exp(tA)$ for some learnable skew-symmetric matrix $A$. Recall that skew-symmetric matrices can be written in the form $S^T \\Lambda S$ where $S$ is orthogonal and

$$
\\Lambda = \\begin{pmatrix} 0 & \\lambda_0 & & & \\\\ -\\lambda_0 & 0 & & & \\\\ & & 0 & \\lambda_1 & \\\\ & & -\\lambda_1 & 0 & \\\\ & & & & \\ddots \\end{pmatrix}
$$

We can then use an identity of the matrix exponential to break down the LieRE rotation matrix: $R = \\exp(t S^T \\Lambda S) = S^T \\exp(t \\Lambda) S$. For two tokens in positions $t, t'$ we denote the embeddings for a specific attention head as $x_t, x_{t'}$. If $K, Q$ denote the corresponding key and query linear transformation matrices, we can write the attention inner product with LieRE explicitly. **... continued at the end of next rebuttal ...** --- Rebuttal Comment 1.1: Comment: Thank you, I think that with these clarifications your paper will be even more solid.
Summary: The authors propose a new positional encoding method called LieRE to replace the widely used RoPE. It is used to improve the spatial relationship representation, especially in 2D and 3D images. Extensive experiments are conducted on classification tasks, and with the proposed PE, the accuracy values are all improved by a significant margin. ## update after rebuttal I carefully read the authors' rebuttal; thanks so much for the responses. The authors also agree that the paper currently lacks evidence on image generation tasks, and that some of the experiments are limited in design and scope. These issues are not fully resolved, and the authors did not clarify how to address them in the final version. The authors mainly use these experiments to "inspire" future works, which I do not find very informative. However, the generalization of RoPE to Lie groups is itself interesting. So the AC can make the final decision balancing these factors. Claims And Evidence: The evidence mostly supports the claims. The proposed LieRE greatly improves the transformer-based classification model by a large margin. Figure 9 also shows great generalization capability of the positional encoding to higher resolutions. Experiments also show that the compute increase is not significant. Methods And Evaluation Criteria: The idea and the theory behind the proposal are elegant and interesting. Using the exponential of a skew-symmetric matrix to generalize RoPE is intuitive and smart, making the position encoding fully learnable and more expressive. Experiments on classification are a simple yet effective way of validating the idea, and the baseline comparison is clear and fair. Given that RoPE is now better validated on image generation tasks, it would be better to also show the potential of LieRE on an image generation task. Theoretical Claims: Not applicable to this paper. 
Experimental Designs Or Analyses: The main concern with the experimental analysis is that the newly proposed PE is only used for image classification tasks and lower-resolution images. This is not sufficient to prove the effectiveness and expressiveness of the new PE for very long-context modeling. The image understanding tasks on the synthetic data are also too limited to prove its effectiveness. The patch shuffle experiments are interesting, but with random patch shuffling, it is not that meaningful to compare the drop rates when the accuracies of the different approaches are similarly low. Supplementary Material: I reviewed the tables and figures, and all the figure lines are very hard to parse. Also, there is only a marginal difference between RoPE and the proposed method. Relation To Broader Scientific Literature: The proposed PE is supposed to be a very general approach and a plug-and-play component for all transformer-based models. The idea has its merits and it has great potential to be generalized to high-resolution image generation tasks. However, the experiments in this paper cannot well support the claims and might not bring significant impact in the literature. Essential References Not Discussed: Not found. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: The overall writing is not that easy to follow. There is a missing figure number in line 307, and the formatting needs improvement. Figure 5 is too hard to parse. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer udrb, We appreciate your recognition of our work's theoretical merits and experimental contributions. We have carefully considered your feedback and would like to address each point. **Long Context for 1D**: LieRE is primarily focused on inputs with dimensionality greater than one. In fact, there is the (non-obvious) property that, in the 1D setting, LieRE has representational ability equivalent to RoPE with learnable frequencies. This is not the case in higher dimensions. We include a proof of this fact at the end of the response and will include it in the paper, as it has been requested by other readers. **Long Context for 2D and 3D**: This implies that the natural equivalent of long-context modeling is evaluating the model at resolutions higher than it was trained at. This is the focus of Section 5.6 (Multi-Resolution Classification). We evaluate with up to four times as many inference tokens as during pretraining and finetuning (Figure 9). **Higher Resolution Image Generation**: We agree that high-resolution image generation is an exciting application for benchmarking position encodings. The long-context image classification examples were motivated in part to create apples-to-apples experiments compared to prior works [1, 2], as these are more focused on image classification. We are very supportive of future work in that application. This application is especially exciting in light of recent autoregressive image generation techniques such as VAR [3] that scale more predictably, enabling smaller-scale experiments. It would be particularly interesting to see how position encodings could influence global consistency. **Patch shuffling**: We hope to clarify that the intent of these experiments is to provide insight into the mechanics of why the methods perform differently, rather than to show that one method is better than another. Patch shuffling allows us to see whether the model is actually using positional information.
We agree with you that it does not identify which method is best at the ultimate applications. For that we depend on the other experiments. The focus of the experiment is to help the reader build an intuition of what is going on with various position encodings. Each method compared is distinct in both how and what kind of positional information it is capable of encoding. LieRE has the ability to use both relative and absolute positional information, and, ideally, we would like to rule out there being a simpler method that could perform just as well. The drop in performance when shuffling patches is one limited datapoint consistent with that hypothesis. **Synthetic task**: We wholeheartedly agree with the assessment that the synthetic task has limitations. Still, we can see clear failures of basic spatial reasoning even in frontier models trained with resources well beyond the reach of any academic lab. Reproducing similar patterns in simple experiments greatly improves the accessibility of studying these issues. In fact, understanding these failures is a growing area of research [6,7]. We believe better position encodings are one of the ingredients that will be necessary to resolve these limitations and hope the data point in our paper is suggestive of that. It is important to note that many VLMs are still trained with absolute position encodings that perform substantially worse than the relative position encoding we benchmarked (Table 1). While paper coherence and practical considerations prevent us from fusing this work with a new research project focused on resolving these limitations on spatial reasoning, we hope this experiment serves as a datapoint to inform future work on spatial reasoning. **Writing and figures**: Thank you for the feedback on the writing and figures. We have used it to improve the readability of the paper and make changes such as switching to more readable, high-contrast color schemes for the figures.
(such as Figure 5: https://postimg.cc/D8w2tcKp) Thank you for the thoughtful review, and we hope we addressed your remaining concerns.
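As background for the patch-shuffling discussion in this thread: shuffling is informative because self-attention without any position encoding is exactly permutation-equivariant, so any sensitivity to patch order must come from the positional information. A minimal sketch demonstrating this property (illustrative only, not code from the paper; `attention` is a hypothetical toy implementation):

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Plain softmax self-attention with no position encoding."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(Q.shape[1])
    A = np.exp(logits - logits.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)          # row-wise softmax over keys
    return A @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))                    # 6 "patch" tokens, dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
perm = rng.permutation(6)
out_then_perm = attention(X, Wq, Wk, Wv)[perm]  # permute the outputs
perm_then_out = attention(X[perm], Wq, Wk, Wv)  # permute the inputs
print(np.allclose(out_then_perm, perm_then_out))  # True: permutation-equivariant
```

Because the two orders of operations agree exactly, a classification accuracy drop under patch shuffling can only be attributed to the position encoding being used.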
Summary: The paper introduces a positional encoding based on Lie groups. The idea of the paper is to parameterize the positional embeddings using skew-symmetric matrices. The authors show the benefit of the proposed method in terms of generalization, data efficiency, and compute needed. Overall, the idea is novel and interesting. The authors have empirical evidence that validates the quality of their work. However, the paper quality needs to be improved, both in terms of presentation and writing. Claims And Evidence: The paper claims are well supported by the experiments. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Everything. Relation To Broader Scientific Literature: The paper expands the equivariance literature in terms of positional embeddings for Lie groups. Essential References Not Discussed: The paper overlooks several works in equivariant transformers and neural networks: - Equivariant transformer networks, Tai et al - Equivariant Neural Functional Networks for Transformers, Tran et al - Equiformer: Equivariant graph attention transformer for 3D atomistic graphs, Liao et al - Polar transformer networks, Esteves et al - Efficient equivariant network, He et al - SE(3)-Transformers: 3D roto-translation equivariant attention networks, Fuchs et al - Learning SO(3) equivariant representations with spherical CNNs, Esteves et al Also a lot of work in equivariance: - Harmonic networks: Deep translation and rotation equivariance, Worrall et al Even in Lie-group ML, there are plenty of references missing: - Lie group algebra convolutional filters, Kumar et al.
- Deep learning symmetries and their Lie groups, algebras, and subalgebras from first principles, Forestano et al - Differential geometry and Lie groups, Gallier et al - Reparameterizing distributions on Lie groups, Falorsi et al Other Strengths And Weaknesses: Comments: C1 - Use a definition for the property of equations (1) and (2). For example, for Eq. (2) use Comm(U,V). C2 - Page 4, line 216 - "Algorithm" should be capitalized. All subsequent uses of the word "Algorithm" should also be capitalized. C3 - Page 5 has a lot of blank space. I believe fixing this would improve the quality of the presentation. C4 - The text under Figure 1 is confusing and needs better structure. Some quantities are defined but never used. C5 - In the experiments, a piece of the text was removed, making the sentence incomplete: “We train the models on 800,000 examples and observe that they generally converge after the first 400,000. The only exception to this is absolute position encodings, where we have variants trained on 800,000 and” There is also a missing reference: “Please refer to the appendix for attention map examples, Figure 13 and Figure ??.” C6 - On page 15, the text covers the page number. Other Comments Or Suggestions: See before. Questions For Authors: The authors do not include any equivariant transformer architecture and only compare with two other embedding works. Why do the authors not compare with any of the architectures mentioned above? I believe that the authors need to compare their results with some of the equivariant / quasi-equivariant works mentioned above. Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and thorough review and writing feedback, which has helped strengthen the paper. We have addressed the typos and writing style in the revision based on your comments. **Equivariant work comparison**: We are excited about the equivariant line of work! We believe it is key to learning sample-efficient representations for domains with extensive symmetry. We will include a new section of the related work that relates LieRE to the suggested works on equivariance and other work on Lie groups in ML. As a complementary architectural change, equivariant transformers do not directly compete with position encodings such as LieRE. Position encodings such as LieRE, RoPE-Mixed, and absolute position encodings are minimally invasive modifications to the base transformer architecture, making them compatible with many different architectures. In the case of LieRE, this enables things like finetuning existing LLM weights to become multimodal models capable of handling high-dimensional data. Combining them would be an exciting area of research. One of the observations present in both this work and the most closely related work [1] was that translation invariance is sometimes helpful, but in other cases having access to the reference coordinate system can help performance. This is supported by the fact that adding absolute position encodings helps RoPE-Mixed, and that LieRE performs best in the regime where the attention patterns are not necessarily constrained to be translation invariant (recall that the attention patterns are translation invariant only when the block size is constrained to two). Fortunately, this is compatible with many of the designs in the equivariant line of work. The natural question is how to let models benefit from both the sample efficiency of equivariance and the fact that the absolute coordinate system sometimes contains useful information.
Combining these methods in the right way is an exciting future direction but has enough technical complexity that it is hard to incorporate into the current paper without losing focus and diluting the individual learnings around position encodings. The contributions in this paper are easiest to understand when comparing to earlier works that modify the same aspect of the transformer architecture. We hope you agree that the revisions to the paper provide a stronger connection to the equivariant line of work and that an extended related work accelerates future research in the area. **Equivariance related work**: A related branch of work on encoding problem structure focuses on equivariance. We say that a model $T$ is equivariant with respect to $f$ if $T(f(x)) = g(T(x))$ for some $g$ [8]. Whereas with relative position encoding we often want to encode translation invariance, equivariance provides a more general framework. Equivariance has been applied to improve performance on problems with a wide array of structures, ranging from rotation invariance [10,13,14] to 3D reference-frame invariance [9,12] and many others. The subset of these works that focus on generating equivariant token embeddings for transformers can be combined directly with LieRE or another rotation-based position encoding. **Lie groups in machine learning**: Lie groups have also seen extensive use in machine learning. The range of works is diverse, spanning algebraic signal processing [15], automated discovery of symmetries [16], and state estimation [18]. Furthermore, [17] provides a friendly introduction to differential geometry and Lie groups that may be of interest to the reader. We thank the reviewer for highlighting this connection and helping us improve the paper. References are replaced with links to respect character limits.
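The equivariance definition above, $T(f(x)) = g(T(x))$, can be made concrete with the classic example of a translation-equivariant map: circular convolution, where $f$ and $g$ are both a shift. An illustrative sketch, not tied to any of the cited works; `circ_conv` is a hypothetical helper:

```python
import numpy as np

def circ_conv(x, w):
    """Circular 1D convolution: plays the role of T in T(f(x)) = g(T(x))."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * w[j] for j in range(len(w)))
                     for i in range(n)])

x = np.arange(8, dtype=float)
w = np.array([0.25, 0.5, 0.25])
# f = shift the input by 2; g = the same shift applied to the output
lhs = circ_conv(np.roll(x, 2), w)   # T(f(x))
rhs = np.roll(circ_conv(x, w), 2)   # g(T(x))
print(np.allclose(lhs, rhs))        # True: convolution is shift-equivariant
```

Here invariance would be the special case where $g$ is the identity; equivariance, as the rebuttal notes, is the more general framework.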
[1] https://arxiv.org/abs/2403.13298 [2] https://arxiv.org/abs/2403.00522 [3] https://arxiv.org/abs/2404.02905 [4] https://arxiv.org/abs/2212.09748 [5] https://arxiv.org/abs/2112.10752 [6] https://arxiv.org/abs/2411.04097 [7] https://arxiv.org/abs/2406.15955 [8] https://arxiv.org/abs/1901.11399 [9] https://arxiv.org/abs/2206.11990 [10] https://arxiv.org/abs/1709.01889 [11] https://proceedings.neurips.cc/paper_files/paper/2021/file/2a79ea27c279e471f4d180b08d62b00a-Paper.pdf [12] https://arxiv.org/abs/2006.10503 [13] https://arxiv.org/abs/1711.06721 [14] https://arxiv.org/abs/1612.04642 [15] https://arxiv.org/abs/2305.04431 [16] https://arxiv.org/abs/2301.05638 [17] https://link.springer.com/book/10.1007/978-3-030-46040-2 [18] https://arxiv.org/abs/1903.02958 [19] https://arxiv.org/abs/1912.12180 [20] https://arxiv.org/abs/2103.03206 [21] https://arxiv.org/abs/2103.14030 [22] https://arxiv.org/abs/2401.10166 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I have updated my score accordingly.
Conditional Lagrangian Wasserstein Flow for Time Series Imputation
Reject
Summary: The paper proposes a time-series imputation method based on optimal transport flow matching. To improve point estimation of the imputations, the paper suggests learning an additional denoising autoencoder, which, when used during sampling, reduces imputation variance. Evaluation with common point-estimation metrics shows favorable performance of the new method. Claims And Evidence: 1. Paragraph 3 in the introduction claims that "we treat the multivariate time series imputation task as a conditional optimal transport problem, whereby the random noise is the source distribution, the missing data is the target distribution, and the observed data is the conditional information." This suggests that the paper aims to learn the conditional _distribution_ of the missing data. * However, the addition of the drift term in $v_t^{\phi}$ in the sampling procedure (Algorithm 2) pushes the samples towards the conditional mean of the missing data distribution. As a result, the approach may potentially ignore the uncertainty, as is also highlighted at the end of Appendix C. * It is therefore important to highlight that the addition of this drift term may be used to trade off imputation uncertainty for lower-variance imputations (that may be close to the conditional mean of the missing distribution). * While singular imputation methods that aim to impute data with samples from the conditional mean (alternatively, mode or median) have important applications, it is also important to highlight the known caveats of singular imputation as compared to multiple imputation, which aims to sample the distribution of missing values; see e.g. Section 2.1.2 of [1]. As such, I believe that the paper could benefit its readers by more clearly stating the imputation goals, potentially highlighting the caveats of singular imputation if that is the main goal, and elaborating that "reduc[ing] the sampling variances" can in fact reduce the probabilistic method to a singular imputation method. 2.
Please also provide a reference or a proof for the statement in the second column of Line 169. Namely, please elaborate how equation 13 minimises the kinetic energy in equation 7. References: [1] van Buuren (2018). Flexible Imputation of Missing Data. Methods And Evaluation Criteria: The method proposed in Section 3 is sound to the best of my understanding, and provides a way to trade off sampling the correct missing data distribution for samples that are closer to the conditional mean by using the learnt drift term $v_t^{\phi}$. However, it is important to assess this trade-off empirically using probabilistic metrics: * The current evaluations focused only on point-wise metrics, such as MAE and RMSE. * The paper should evaluate the proposed method using some probabilistic metrics, such as CRPS, MMD, Wasserstein distance, and so on [1,2,3]. I would expect that the Rao-Blackwellization ablations in Table 5 would reveal that Rao-Blackwellization reduces the performance on these probabilistic metrics, since it encourages the sampler to be close to the conditional mean of the missing values, rather than _sampling_ the true distribution of missing values. References: [1] Muzellec et al (2020). Missing Data Imputation using Optimal Transport [2] Jolicoeur-Martineau et al (2024). Generating and Imputing Tabular Data via Diffusion and Flow-based Gradient-Boosted Trees [3] van Buuren (2018). Flexible Imputation of Missing Data Theoretical Claims: The theoretical results and proofs in the Appendix seem correct, but as discussed above the potential caveats of "reducing sampling variance" (via Rao-Blackwellization) need to be discussed and evaluated. Experimental Designs Or Analyses: The evaluations in Section 4 seem ok for point-wise estimation. However, the goals in the introduction must clearly state whether the paper aims to solve singular or multiple imputation.
For example, if the goal is multiple imputation, then the ablation in Table 3 does not make much sense, since CSDI inherently samples the distribution of the missing data, and as such any increase in the point-wise metrics may be attributed to the inherent uncertainty of the true missing data distribution. Some of the baseline results in Table 1 are copied from [1], which is ok, but should be acknowledged where appropriate! Highlighting whether there may be any differences in the evaluation settings would also be helpful. References: [1] Chen et al (2023). Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation. Supplementary Material: I have reviewed Appendix C about the Rao-Blackwellization, and the section seems sound. But the caveats of the approach need to be explained clearly in the text. The "resampling trick" in Appendix D is also clear and sound. Relation To Broader Scientific Literature: The paper builds heavily on CSDI [1] by using a similar model specification, architecture, and evaluation. The authors replace score matching with a combination of flow matching and the Rao-Blackwellization term, which IMO is sufficiently novel and shows improved point-estimation performance. However, as the paper drops the probabilistic metric (CRPS) from [1], I assume that it may have deteriorated, and thus highlighting this performance drop could be useful to the readers of the paper. References: [1] Tashiro et al (2021). CSDI: Conditional Score-based Diffusion Models for Probabilistic Time Series Imputation Essential References Not Discussed: No additional references are needed, but the paper should highlight where the copied baseline results come from. E.g. some of them seem to be copied from [1], but others may be from elsewhere. References: [1] Chen et al (2023). Provably Convergent Schrödinger Bridge with Applications to Probabilistic Time Series Imputation.
Other Strengths And Weaknesses: The paper is fairly well-written and presents a sufficiently novel method. The results on point-estimation are fairly good and will be of interest to some practitioners interested in missing data imputation. However, it is important to improve the rigour of the problem setting and the goals of the approach (e.g. does the paper aim to generate singular imputation, sample multiple imputation, or provide a controllable way between the two?). Other Comments Or Suggestions: Typos: * Line 105: "equatio" -> "equation" * Line 137: I believe the $\mathbb{R}$ should be $\{0, 1\}$ as the mask $M$ should be binary. * Line 200: "learnning" -> "learning" * Line 201: "introduce" -> "introduced" * Line 266: "flowing" -> "flow" * Line 913: "Neyma" -> "Neyman" Questions For Authors: No further questions, please see the questions/concerns above. To me, the most important aspect to address is the problem setting and highlighting the potential caveats of Rao-Blackwellizing the imputations. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the insightful comments (due to the length limit, the response is kept concise). ### **Claims And Evidence** 1. > - However, the addition of the drift term... #### **Response**: The proposed theoretical framework can be used to analyze the learning process of the conditional distribution. However, in practice we find that estimating the conditional mean of the distribution achieves better empirical performance in terms of RMSE and MAE. The Rao-Blackwellization only focuses on reducing the variance of the sampler, which may disregard the uncertainty. Nevertheless, this can also be beneficial for reducing the sampling time of the base unbiased sampler, because after the Rao-Blackwellization we can use the same number of Monte Carlo path samples to estimate the sample means more accurately. Please refer to the experimental results shown in Appendix F.2. > - ... elaborating that "reduc[ing] the sampling variances" can in fact reduce the probabilistic method to a singular imputation method. #### **Response**: Thanks for the suggestion. The realization of the proposed method is more suitable for singular imputation. We will add the following content to the main paper: "If we aim for singular imputation, i.e., estimating the conditional means of the missing data, the Rao-Blackwellization can be used to reduce the sampling variances. However, please note that the sampling diversity may decrease." > 2. Please also provide a reference or a proof to the statement in second column of Line 169.... #### **Response**: According to the definition of the dynamical OT/Schrödinger bridge problem given by Eq (5), its infimum is in fact the Wasserstein distance between the initial and target distributions, and the velocity field $\mu_t$ satisfies the Fokker-Planck equation, Eq (2).
The kinetic energy functional in Eq (7) is minimized when the transport path is the geodesic in the Wasserstein space. The velocity field is $dx/dt = \mu_t(x) = \frac{X_T - X_0}{T - 0}$, and the optimal solution satisfies $\Pi_t(x) = (1-t)x + tT(x)$, where $T$ is the optimal pushforward map. The velocity field is then $\mu_t(x) = dx/dt = T(x) - x$, with norm $\|\mu_t(x)\|^2 = \|T(x) - x\|^2$. The corresponding kinetic energy reads $K = \int_0^T \int \|T(x) - x\|^2 \, dp(x_t) \, dt$, where the intermediate sample $p(x_t)$ is obtained by interpolating the initial and target distributions using Eq (7). Thus, $K$ evaluates the Wasserstein distance between the initial and target distributions. By the Cauchy-Schwarz inequality, any other path will have greater kinetic energy. We will include this proof in the appendix. Reference: ##### Neklyudov, et al. A computational framework for solving Wasserstein Lagrangian flows. ICML, 2024. ### **Methods And Evaluation Criteria** > The paper should evaluate the proposed method using some probabilistic metrics... #### **Response**: Thanks. We show the performance comparison between the base sampler and the Rao-Blackwellized one on ETTh1:

|  | MMD | Wasserstein | CRPS |
| --- | --- | --- | --- |
| Base | 0.13 | 52.42 | 0.19 |
| RB | 0.12 | 48.70 | 0.19 |

This shows that reducing variances may be beneficial for performance in our case. We will highlight in the paper that RB may degrade the model's performance on probabilistic metrics. ### **Experimental Designs Or Analyses** >However, the goals in the introduction must clearly state whether the paper aims to solve singular or multiple imputation. #### **Response**: As we stated in Sec 2.1, the theory is based on the SDE formulation of the OT problem as it provides a general framework. However, in practice, we opt for the ODE sampler to achieve the best empirical results for singular imputation. We will highlight the above explanation in the paper.
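The geodesic claim in the proof sketch above (the straight-line interpolation has the smallest kinetic action, and any detour between the same endpoints costs more) can be checked numerically with a simple discretization. An illustrative sketch only, not code from the paper; `kinetic_energy` is a hypothetical helper:

```python
import numpy as np

def kinetic_energy(path, T=1.0):
    """Discrete kinetic action: sum over segments of ||dx/dt||^2 * dt."""
    dt = T / (len(path) - 1)
    v = np.diff(path, axis=0) / dt
    return float(np.sum(v ** 2) * dt)

x0, xT = np.array([0.0, 0.0]), np.array([3.0, 4.0])
t = np.linspace(0.0, 1.0, 101)[:, None]
straight = (1 - t) * x0 + t * xT                              # constant-velocity geodesic
bent = straight + np.sin(np.pi * t) * np.array([1.0, -1.0])   # same endpoints, detour
print(kinetic_energy(straight))   # equals ||xT - x0||^2 = 25 up to float error
print(kinetic_energy(straight) < kinetic_energy(bent))
```

For the straight path the velocity is constant, so the action equals the squared Euclidean distance between the endpoints (here $3^2 + 4^2 = 25$), mirroring how $K$ evaluates the squared Wasserstein-2 distance in the point-mass case; the perturbed path shares both endpoints but has strictly larger action.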
>Some of the baseline results in table 1 are copied from [1] #### **Response**: Thanks for the suggestion. The results will be explicitly acknowledged in the experiment part of the main text as follows: Some of the results of the baselines shown in Table 1 are reported from [1]. ### **Relation To Broader Scientific Literature** #### **Response**: Indeed, reducing the variance of samples may harm the model's probabilistic modeling performance. The reason we opt for non-probabilistic metrics to evaluate the model is that the designed base sampler is based on an ODE, which attains better RMSEs and MAEs compared to existing methods. Moreover, CRPS may not be a suitable evaluation metric in our case. [1] and [2] suggest that CRPS could lead to an incorrect assessment of overall model performance by overlooking the model's performance on each dimension of the data. ##### [1] Alireza, et al. Random noise vs. state-of-the-art probabilistic forecasting methods: A case study on CRPS-Sum discrimination ability. Applied Sciences, 2022. ##### [2] Marin, et al. Modeling temporal data as continuous functions with stochastic process diffusion. ICML, 2023. #### **Typos** Thanks. We will revise the paper. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I am mostly happy with the response and keep my original recommendation. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We are glad that you like the response.
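For readers unfamiliar with the probabilistic metrics debated in this thread, the sample-based CRPS estimator can be written as $\mathbb{E}|X - y| - \tfrac{1}{2}\mathbb{E}|X - X'|$ over ensemble members. An illustrative sketch, not the evaluation code used by either paper; `crps_ensemble` is a hypothetical helper:

```python
import numpy as np

def crps_ensemble(samples, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|.

    Lower is better; it rewards ensembles that are both close to the
    observation y and appropriately spread."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
ens = rng.normal(loc=0.0, scale=1.0, size=2000)
# Analytically, CRPS of a standard normal forecast at y = 0 is
# (sqrt(2) - 1) / sqrt(pi) ≈ 0.234; the estimate should be close.
print(crps_ensemble(ens, 0.0))
```

This illustrates the trade-off discussed above: collapsing an ensemble to its mean shrinks the spread term but inflates the first term whenever the true missing values are genuinely uncertain.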
Summary: In this paper, the authors propose a novel time-series imputation approach named `Conditional Lagrangian Wasserstein Flow` (CLWF), based on functional optimization approaches such as the Schrödinger bridge, optimal transport, and flow matching. First, the authors reformulate the data imputation problem as an optimization of some potential function, which can be solved by simulating a dynamical system under the principle of least action. The authors then design a time-dependent denoising autoencoder (TDAE) to estimate the gradient of the potential function. Based on the above, the authors summarize the algorithms for TDAE training and CLWF sampling. Finally, experiments are conducted on related datasets to demonstrate the efficacy of the proposed approach. ## update after rebuttal I have carefully read the authors' reply and I think my concerns have not been addressed. The article lacks clear motivation, the understanding of the concept of "diff" is unclear, and there might be problems in the code implementation (averaging of the inference results, not calculating the mask matrix). Claims And Evidence: 1. On page one, the authors mention that diffusion models `are limited to slow convergence or large computational costs.` To the reviewer's knowledge, the acceleration of diffusion models has been attempted in previous works such as DPM-Solver [1] and the Diffusion Exponential Integrator Sampler [2]. The claim was made without considering this progress. 2.
On page 1, right column, `In our method, we treat the multivariate time series imputation task as a conditional optimal transport problem, whereby the random noise is the source distribution, the missing data is the target distribution, and the observed data is the conditional information.` It seems that the missing mechanism is ignored in this procedure, which indicates that the proposed approach may not be suitable for the missing not at random (MNAR) scenario, a common case in recommender systems. 3. In Eq. (5), the authors have not provided the boundary condition; to the reviewer's knowledge, solving an optimal control or Schrödinger bridge problem should include the boundary condition. 4. On page 3, right column, the problem statement of the time-series imputation task has not been given in detail; for example, is $M_{i,j}\in\{0,1\}$? 5. On page 4, the authors state that `Note that Eq. (12) can only allow us to generate time-dependent intermediate samples in the Euclidean space but not the Wasserstein space, which can lead to slow convergence as the sampling paths are not straightened.` Is the reason that Eq. (12) does not satisfy the Fokker-Planck equation, which delineates the conservation of density in Wasserstein space? Please give a detailed proof. 6. In the supplementary material, the authors state that `Therefore, the data generation task can also be interpreted as a stochastic optimal control (SOC) problem`. It seems that the claim that the data generation problem can be treated as an SOC problem rests on the fact that the SOC objective forms an upper bound of the KL divergence between generated samples and original data samples, as supported by reference [3]; this has not been clarified. --- References [1]. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, NeurIPS 2022 [2]. Fast Sampling of Diffusion Models with Exponential Integrator, ICLR 2023 [3].
Path Integral Sampler: a stochastic control approach for sampling, ICLR 2022 Methods And Evaluation Criteria: 1. Proposed method: Treating the data imputation task as a data generation task, which introduces noise during the imputation procedure, may not be suitable. Specifically, reference [1] has explicitly pointed out that `Specifically, diversity is a key objective of the generation problem, which requires the generated data to vary significantly while maintaining relevance to the given context. Diffusion models being sensitive to the initial noise ($x_T$) at the generation stage helps generate diverse samples – different noise usually leads to different generated samples. Conversely, the objective of the imputation task is accuracy rather than diversity, requiring the imputed data to closely resemble the singular groundtruth values.` The proposed approach also generates missing data from noise, which seems to go against this statement. More validation needs to be provided to support the proposed method. Relatedly, reference [2] has analyzed this issue from the perspective of gradient flows, a mathematical tool based on ODEs and PDEs. 2. Evaluation: The results in Tables 1 to 5 are not accompanied by paired-sample $t$-tests. 3. The overall algorithm for CLWF has not been given: suppose we have a dataset with missing values at hand; how can we use CLWF *ab initio* to fill in the missing data? References: [1]. Self-Supervision Improves Diffusion Models for Tabular Data Imputation, CIKM 2024 [2]. Rethinking the Diffusion Models for Missing Data Imputation: A Gradient Flow Perspective, NeurIPS 2024 Theoretical Claims: 1. Eq. (46). It seems that the optimal control policy given by (46) is the solution of some infinite-horizon optimal control problem; can we further extend this to the finite-horizon problem? 2.
In appendix B.6 it seems that flow matching is based on a first-order dynamical system, but Lagrangian mechanics, which involves Hamiltonian computation, is second order; should we understand this as a higher-order system? 3. In Section 3.4, should we use $U_t(X_t) \propto -\log{\mathcal{N}(X_t\vert \widehat{X}_t, \sigma_p^2)}$? Experimental Designs Or Analyses: 1. Notably, the authors have proposed an optimal-transport-based approach, but the baseline comparison does not include static optimal transport approaches, for example references [1], [2]. 2. The computational time comparison in Table 10 should include those of the baseline models. --- References [1]. Missing Data Imputation using Optimal Transport, ICML 2020 [2]. Transformed Distribution Matching for Missing Value Imputation, ICML 2023 Supplementary Material: Yes, I have read the full supplementary material. Appendix D illustrates the resampling trick in Fig. 4 without detailed descriptions, which is a bit unclear. Step-by-step descriptions should be included for clarity. Relation To Broader Scientific Literature: See the references listed above. Essential References Not Discussed: See the references listed above. Other Strengths And Weaknesses: ### Strengths 1. The topic is relevant to the ICML conference. 2. The proposed approach seems interesting. ### Weaknesses 1. There is no need to introduce concepts like the Schrödinger bridge in the main content, since the main content does not use this concept. 2. The detailed evaluation metric computational protocols should be included in the manuscript. Other Comments Or Suggestions: Refer to the above contents. Questions For Authors: Refer to the above contents. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you very much for the insightful comments (due to the length limit, we have to keep the response concise here). ### **Claims And Evidence** > 1. To the reviewer's knowledge, the acceleration of diffusion models ... #### **Response**: Here, we mainly refer to the diffusion models for time series imputation tasks, e.g., CSDI. Indeed, DPM-Solver and the Diffusion Exponential Integrator Sampler can accelerate the inference procedures of diffusion models. However, CLWF can use fewer model evaluation steps to achieve competitive results (please refer to the experimental results in Appendix F.1 on Page 20). We will discuss the above works in the paper. >2. It seems that the missing mechanism ... #### **Response**: The proposed method is able to impute unconditionally, so it may be able to deal with this problem to some extent. However, we will investigate this issue in future work. >3. In Eq. (5), the authors have not provided the boundary condition, ... #### **Response**: The boundary condition is given by the definition of the SDE in Eq (1), letting the initial distribution equal $p(X_0)$ and the target distribution equal $p(X_T)$ via stochastic interpolation. >4. is $M_{i,j} \in \{0,1\}$? #### **Response**: Yes, we will define the problem more rigorously. >5. Is it the reason that Eq. (12) does not satisfy the Fokker-Planck equation that... #### **Response**: If Eq (12) is performed in the Wasserstein space via projection, then it still satisfies the Fokker-Planck equation. >6. It seems that the theory that the data generation problem can be treated as an SOC... #### **Response**: The KL divergence between the sampling path measure (related to the controlled SDE) and the reference path measure (related to the uncontrolled SDE) can be minimized by solving the corresponding SOC problem, according to the Girsanov theorem. Please also refer to Appendix B.4. ### **Methods And Evaluation Criteria** >1. ...proposed approach also generates...
#### **Response**: If we can estimate the target distribution mean unbiasedly, then a generative methodology is suitable for imputation tasks.

> 2. We will include this in the paper.

> 3. The training procedure of CLWF is described in Algorithm 1 in Sec. 3.6 on Page 5 and the inference procedure of CLWF is described in Algorithm 2 in Sec. 3.6 on Page 6.

### **Theoretical Claims**

> 1. It seems that the optimal control policy given by (46)...?

#### **Response**: Eq. (46) in fact solves a finite-horizon control problem from $t_0$ to $t_T$. It is possible to extend this to the infinite-horizon problem if we can learn a control function that asymptotically stabilizes the system by satisfying a Lyapunov stability function. In fact, [1] proves that minimizing the free energy functional over all probability densities satisfies Lyapunov stability.

##### [1] H. Risken, The Fokker-Planck equation: Methods of solution and applications, 2nd ed., Springer-Verlag, Berlin, Heidelberg, 1989.

> 2. In appendix B.6 it seems that the flow matching is based on the first order dynamical system, ...?

#### **Response**: The second-order term is involved in the Lagrangian mechanics (the text above Appendix B.4 on Page 15). However, we adopt a sampler based on the overdamped Langevin dynamics, in which the acceleration is assumed to be negligible due to the existence of strong friction: $m\,d^2x/dt^2 = -\lambda\, dx/dt - \nabla U(x) + \sqrt{2M}\,dW_t \approx 0$; therefore, the second-order system is reduced to a first-order system: $\lambda\, dx/dt = -\nabla U(x) + \sqrt{2M}\,dW_t$.

> 3. In Section 3.4 shall we use $U_t(X_t) \propto -\log \mathcal{N}(X_t \vert \hat{X}_t, \sigma_p^2)$?

#### **Response**: Thanks for the suggestion. This may be more rigorous.

### **Experimental Designs Or Analyses**

> 1.
| Method | ETT-h1 (0.25) RMSE | ETT-h1 (0.25) MAE | ETT-h1 (0.375) RMSE | ETT-h1 (0.375) MAE | ETT-h1 (0.5) RMSE | ETT-h1 (0.5) MAE |
| --- | --- | --- | --- | --- | --- | --- |
| OTImputer | 0.967 | 0.665 | 0.915 | 0.623 | 0.944 | 0.696 |
| TDM | 0.960 | 0.732 | 0.976 | 0.740 | 0.997 | 0.749 |
| CLWF | 0.197 | 0.128 | 0.263 | 0.171 | 0.323 | 0.205 |

> 2. Thanks for the suggestion. Please see the response to Reviewer cRw4.

### **Weaknesses**

> 1. no need for introducing concepts like Schrodinger bridge in the main content.

#### **Response**: Thanks for the suggestion. The main reason we introduce the Schrodinger bridge problem (the dynamic optimal transport problem considering stochasticity) in the main text is to link the sampling method to the SDE and SOC, which can help us provide a more generalized theoretical framework. We will move the relevant introduction to the appendix.

> 2. The detailed evaluation metric computational protocols should be included in the manuscript.

#### **Response**: The RMSE: $RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^N (x_i-\hat{x}_i)^2}$, where $N$ is the total number of the imputation target datapoints, $x_i$ is the target data value, and $\hat{x}_i$ is the imputed data value.

#### The MAE: $MAE = \frac{1}{N} \sum_{i=1}^N |x_i-\hat{x}_i|$.

We will include the computation protocols in the appendix.

---

Rebuttal Comment 1.1: Comment: It appears that the author's rebuttal has not fully addressed the reviewer's concerns. The specific points of contention are listed as follows:

1. The acceleration mechanism for the DPM solver relies on the property that the OU process contains a linear drift term, allowing it to be modeled by an exponential integrator. However, the SDE presented in the manuscript does not include such a linear drift term. This suggests that the author may have misunderstood the underlying concepts.

2. If the missing-data mechanism is not explicitly considered—for instance, under a Missing Not At Random (MNAR) scenario—this can lead to incorrect results.
If this assumption is integral to the proposed approach, we recommend clearly articulating it in the manuscript.

3. Please provide a detailed proof for Eq. (12) to substantiate the claim made in the proposal.

4. The claim that strong friction can negate acceleration requires clarification. If this setup indeed renders related references irrelevant, could you explain why? Otherwise, it would suggest that those references [1-2] serve no meaningful purpose in the context of this discussion.

5. The computation of MAE is given as \( MAE = \frac{1}{N} \sum_{i=1}^N |x_i - \hat{x}_i| \). However, based on `PyPOTS/pypots/nn/functional/error.py`, it seems that the mask matrix should be included in the calculation. Could you clarify this discrepancy?

6. "If we can estimate the target distribution's mean unbiasedly, then a generative methodology is suitable for imputation tasks." However, when the target distribution is bimodal—such as a Gaussian Mixture Model (GMM)—which "mean" are we referring to?

---

References:

[1]. Accelerated Flow for Probability Distributions, ICML 2019
[2]. A variational perspective on accelerated methods in optimization, PNAS 2016

---

Reply to Comment 1.1.1: Comment: Many thanks for the feedback.

> **C1**: acceleration mechanism for the DPM solver...

#### **R1**: The acceleration mechanism proposed in the DPM solver aims to reduce the numerical error of the time integral for the semilinear probability flow ODE learned by a DDPM/VP-SDE (whose sampling path is not necessarily the optimal/shortest in the Wasserstein sense). In contrast, the drift term of the SDE learned via OT theory and flow matching in the paper allows us to compute the geodesic/straight/shortest path in Wasserstein space to accelerate the sampling speed. We will include the discussion in the paper.

> **C2**: If the missing-data mechanism...

#### **R2**: We are aware of the importance of the problem setup the reviewer raised.
The MNAR scenario is indeed tricky to handle as it requires domain-specific knowledge. In our setting, the masks are generated independently of the data. This problem formulation is consistent with numerous prior works on time series imputation, as listed below. Nevertheless, we will explicitly clarify this issue in the paper.

#### References:
#### [1] Fortuin, et al. GP-VAE: Deep probabilistic time series imputation. AISTATS, 2020.
#### [2] Tashiro, et al. CSDI: Conditional score-based diffusion models for probabilistic time series imputation. NeurIPS, 2021.
#### [3] Bilos, et al. Modeling temporal data as continuous functions with stochastic process diffusion. ICML, 2023.
#### [4] Wu, et al. TimesNet: Temporal 2D-variation modeling for general time series analysis. ICLR, 2023.
#### [5] Chen, et al. Provably convergent Schrodinger bridge with applications to probabilistic time series imputation. ICML, 2023.
#### [6] Wang, et al. Optimal Transport for Time Series Imputation. ICLR, 2025.

> **C3**: Please provide a detailed proof ...

#### **R3**: We can understand the flow dynamics from two different perspectives, namely, the Lagrangian specification and the Eulerian specification. In the Lagrangian specification of the flow field, we focus on the motion of the individual particles. This can be depicted by an SDE or Eq. (12). In the Eulerian specification of the flow field, we focus on the time evolution of the population/probability density, which is depicted by the Fokker-Planck equation. If we are able to make $X_t$ follow $\frac{dX_t}{dt} = \mu(X_t, t)$ and satisfy the density function $p(X_t)$ at the same time (that is what we are trying to do in the paper), then the corresponding Fokker-Planck equation is satisfied (please refer to [7]). In this way, we can further formulate the base sampler and the sufficient statistic in the Rao-Blackwellization sense. We will include the detailed explanation in the paper.

#### Reference:
#### [7] Liu.
Rectified flow: A marginal preserving approach to optimal transport. arXiv preprint arXiv:2209.14577 (2022).

> **C4**: The claim that strong friction...

#### **R4**: In the context of Langevin dynamics, when the friction is very strong, the timescale of inertial motion becomes extremely short, and the system quickly reaches terminal velocity, where friction balances the other forces. Therefore, the acceleration becomes negligible (please see [8-9]). Note that $\frac{dx_t}{dt} = -\nabla U (x_t)$ is also the strong-friction limit $\lambda \rightarrow \infty$ of the second-order ODE $\ddot{x}_t + \lambda \dot{x}_t + \lambda \nabla U (x_t) = 0$ in Nesterov's acceleration method (please refer to [10], SI Appendix, H1). In our method, we do not estimate the kinetic energy and potential energy at the same time to model the dynamics (unlike in [10-11]). Instead, we estimate the same flow dynamics from two different perspectives independently: $\frac{dx_t}{dt} = v_t(x,t)$ (related to the kinetic energy) and $\frac{dx_t}{dt} = -\nabla U(x_t)$ (related to the potential energy, which can also serve as the control signal; please see Appendix B.4). This enables us to formulate a new Rao-Blackwellized sampler to improve the model's performance.

#### References:
#### [8] Villani, Cédric. Optimal transport: old and new. Vol. 338. Berlin: Springer, 2008, pp. 646.
#### [9] Bussi, et al. Accurate sampling using Langevin dynamics. Physical Review E, 2007.
#### [10] A variational perspective on accelerated methods in optimization, PNAS 2016.
#### [11] Accelerated Flow for Probability Distributions, ICML 2019.

> **C5**: The computation of MAE...

#### **R5**: Thanks for the comment. There is no discrepancy. We included the mask matrix in the calculation in exactly the same manner as in previous works (e.g., CSDI). We will clarify this in the paper.

> **C6**: which "mean" are we referring to?
#### **R6**: We do not consider the multimodal issue in the proposed approach, which makes the task numerically easier to solve. The mean we refer to is the marginal mean of the predictions, which we believe is common practice in the literature as well. This is also a reasonable choice, as it achieves the best empirical performance in terms of RMSE and MAE.
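As context for the C5/R5 exchange above: masked metrics of this kind aggregate errors only at the held-out (imputation-target) positions indicated by the evaluation mask. The sketch below is a hypothetical illustration of that convention, not the authors' code and not the actual PyPOTS implementation:

```python
import numpy as np

def masked_rmse(x, x_hat, mask):
    """RMSE over the masked (imputation-target) entries only."""
    diff = (x - x_hat) * mask
    return np.sqrt((diff ** 2).sum() / mask.sum())

def masked_mae(x, x_hat, mask):
    """MAE over the masked (imputation-target) entries only."""
    return np.abs((x - x_hat) * mask).sum() / mask.sum()

x = np.array([1.0, 2.0, 3.0, 4.0])       # ground truth
x_hat = np.array([1.0, 2.5, 3.0, 5.0])   # imputed values
mask = np.array([0.0, 1.0, 0.0, 1.0])    # evaluate only positions 1 and 3

print(masked_mae(x, x_hat, mask))   # (0.5 + 1.0) / 2 = 0.75
```

Dividing by `mask.sum()` rather than the full length is the point of contention: unmasked (observed) positions contribute neither to the numerator nor to the denominator.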
Summary: The article proposes a methodology for time series imputation using Wasserstein flows. The paper presents a number of theoretical elements required for their contribution, to then validate their method via simulations.

## update after rebuttal

As I posted early in the discussion period, the rebuttal does not cover the concerns I raised. Therefore, I maintain my recommendation.

Claims And Evidence: The paper starts (abstract and intro) by motivating their work with the drawbacks of diffusion models for data imputation, namely, computational complexity and slow convergence. However, their proposal is never assessed in these terms, but only in estimation performance (MSE).

Methods And Evaluation Criteria: The presentation of their proposal is not clear. The paper starts revising the preliminaries (Sec. 2) to then present their methodology in two pages (from mid page 3 to mid page 5). It is hard to identify what in this presentation is novel and what is just an application of known results. The lack of clarity also applies to their diagram in Fig. 1; in the perspective of this reviewer, that diagram provides no clarification or insights into the paper's proposal. Overall, since the formulation of the problem at the beginning of the paper, it is very difficult to understand what the authors are doing. Though the proposed method outperforms the benchmarks according to the experiments in Sec. 4, the poor performance in the first toy example is contradictory - see Fig. 2. In this rather simple example, any interpolator (splines, GP, polynomial) will perform better than the shown results, which exhibit overshoot and nonstationarity of errors. There are also a number of typos and imprecise statements; for instance, line 149 (right) states to "solve" Eq. (7) - however, note that Eq. (7) is a definition, how can it be "solved"?
Theoretical Claims: There are no theorems or other results of that kind, and the connection with known results is vague (e.g., Rao-Blackwellisation).

Experimental Designs Or Analyses: See "Methods and Evaluation Criteria" above.

Supplementary Material: The appendix is long, and I am not sure it contributes. For instance, what is the point of Fig. 3?

Relation To Broader Scientific Literature: This is a very active line of research (diffusion/probabilistic models for time series) and the paper recognises key works in the area.

Essential References Not Discussed: No reference to non-diffusion interpolation methods, such as neural processes, transformer-based, or graph-based methods.

Other Strengths And Weaknesses: The paper presents a promising and interesting idea, but the presentation is vague to the point that it is difficult to understand how such ideas are implemented, which hinders its reproducibility.

Other Comments Or Suggestions: No.

Questions For Authors: Please refer to my criticisms above.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1: Rebuttal:

### **Methods And Evaluation Criteria**

> line 149 (right) states to "solve" Eq. (7) - however, note that Eq. (7) is a definition, how can it be "solved"?

#### **Response**: In Eq. (7), $\mu_t$ is unknown.

### **Supplementary Material**

> For instance, what is the point of Fig 3?

#### **Response**: Fig. 3 illustrates the connections and transformations among the different mathematical methods used in the proposed approach.

---

Rebuttal Comment 1.1: Comment: The rebuttal does not cover the concerns I raised (e.g., the claims of addressing computational complexity and slow convergence in DMs, or the performance in Fig 2). Therefore, I maintain my recommendation.

---

Reply to Comment 1.1.1: Comment: We kindly request the reviewer to read our paper and our responses to the comments from the other reviewers carefully, and make a fair evaluation of our paper accordingly.
Summary: This paper introduces Conditional Lagrangian Wasserstein Flow (CLWF), a time series imputation model that leverages optimal transport theory and Lagrangian mechanics. Following the principle of least action, CLWF learns a velocity field by minimizing kinetic energy, effectively finding the shortest path in probability space to generate missing values. Experiments on synthetic and real-world datasets show that CLWF achieves competitive imputation performance compared to state-of-the-art methods while requiring fewer sampling steps.

Claims And Evidence:
- The authors claim that their method (CLWF) achieves competitive imputation performance compared to state-of-the-art methods while requiring fewer sampling steps, thereby offering faster inference than diffusion-based approaches. This claim is well supported by extensive experiments.
- The authors claim that they show the connection between the proposed method and SOC/path measures. But in my opinion, more supporting detail should be included. In this version of the manuscript, I cannot find the detailed relationship between CLWF and these concepts.

Methods And Evaluation Criteria: I have checked that the methods and evaluation criteria make sense for time series imputation.

Theoretical Claims: There are some theoretical concepts in this paper, but no theorem that specifically requires proof.

Experimental Designs Or Analyses: This work follows experimental designs from prior work in general. I think the experimental designs are sound.

Supplementary Material: I have reviewed the supplementary material and confirmed that it includes the necessary information for reproducing the experiments.

Relation To Broader Scientific Literature: This paper employs flow matching and optimal transport for time series imputation. This is new in this literature and will be valuable.

Essential References Not Discussed: To the best of my knowledge, the authors properly cite relevant papers except for the concurrent work [1].
[1] Wang et al., "Optimal transport for time-series imputation", ICLR 2025

Other Strengths And Weaknesses:

**Strengths**
- This paper is well written and easy to follow. The description of the relationship among SOC, OT, and flow matching in the supplementary material is helpful for a broad audience.
- Timely topic; a novel approach to time series imputation using a flow-based method, reducing the cost of diffusion models for imputation.
- Decent performance on real-world benchmarks.

**Weaknesses**
- I think it would be better to discuss the difference and relationship between similar models like CSBI (e.g., including the algorithm in the supplementary material for comparing the actual difference in computation).

Other Comments Or Suggestions: Please refer to other sections.

Questions For Authors:
- How does this paper relate to the recently published Hao et al., "Optimal transport for time-series imputation", ICLR 2025?
- It would be interesting to apply this to forecasting or extrapolation tasks.
- What is the benefit of framing this imputation problem within the framework of Lagrangian mechanics? What is the difference between this method and directly applying a flow matching method?
- I think including the actual runtime for each method would be helpful.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the insightful comments.

### **Claims And Evidence**

> 1. The authors claim that they show the connection between proposed method and SOC, path measures. But in my opinion, more supporting detail should be included. In this version of manuscript, I cannot find detailed relationship between CLWF and these concepts.

#### **Response**: In CLWF, we treat the data sampling process as a controlled SDE, which is the formulation of an SOC problem (Eq. 18 in Appendix B.1). And the control signal can be the energy function estimated by the TDVE, which is proved in Appendix B. The loss function of CLWF also minimizes the cost function of the SOC problem (Eq. 19 in Appendix B.1). Further, the control function can be used as the sufficient statistic to formulate a new Rao-Blackwellized sampler (please refer to Appendix C) to improve the model's performance. Finally, the control signal of the SOC framework formulates a new path measure (i.e., the sampling path is now controlled) according to the Girsanov theorem. Please refer to Appendix B.4 and Fig. 3 in Appendix C.1.

> 2. There are some theoretical concepts in this paper, but not including theorem specifically requires proof.

#### **Response**: Thanks for the suggestion. We will add the detailed proofs in the paper.

### **Essential References Not Discussed**

> To the best of my knowledge, the authors properly cite relevant papers except for concurrent work [1].

#### **Response**: Thanks for the suggestion. [1] proposed a novel specialized static OT metric which uses the DFT to transform the original time series data into the frequency domain for imputation, while CLWF solves the dynamical OT problem. We will discuss and cite the paper in the main text.
### **Weaknesses**

> I think it would be better to discuss the difference and relationship between similar models like CSBI (e.g., including algorithm in supplementary for comparing actual difference in computation)

#### **Response**: Thanks for the suggestion. Both CSBI and CLWF try to solve the dynamical OT/Schrodinger bridge problem. CSBI solves the SBP via the diffusion model and approximate iterative proportional fitting (IPF), while CLWF adopts the Lagrangian dynamics framework to solve the sampling problem using flow matching and further improves the model's performance via Rao-Blackwellization using a TDVE. We will discuss the difference and relationship between CLWF and similar methods in the main paper and give the detailed algorithm comparison in the appendix.

### **Questions For Authors**

> How this paper relates to the recently published Hao et al., "Optimal transport for time-series imputation", ICLR 2025

#### **Response**: Please see above.

> It would be interesting to apply this for forecasting or extrapolation tasks.

#### **Response**: Thanks for the suggestion. We will explore these tasks in future work.

> What is the benefit of framing this imputation problem within the framework of Lagrangian mechanics? What is difference between this method and directly applying flow matching method?

#### **Response**: Both Lagrangian mechanics and flow matching enable us to obtain the shortest path for the dynamic optimal transport problem. However, compared to the flow matching method, which only considers the drift/velocity, the framework of Lagrangian mechanics includes both the drift term and a control term derived from the potential energy term, which as a result provides us with a more general sampling framework. This enables us to formulate a Rao-Blackwellized sampler to further reduce the variance of the sampler; therefore, the performance of the base sampler (based on flow matching) is improved.
Moreover, the framework of Lagrangian mechanics also enables us to bridge the gap between the optimal transport and SOC problems. Please also refer to the references listed below.

##### [1] Liu, G.-H., Lipman, Y., Nickel, M., Karrer, B., Theodorou, E., and Chen, R. T. Generalized Schrodinger bridge matching. In The Twelfth International Conference on Learning Representations, 2024.
##### [2] Koshizuka, T. and Sato, I. Neural Lagrangian Schrodinger bridge: Diffusion modeling for population dynamics. In The Eleventh International Conference on Learning Representations, 2023.
##### [3] Neklyudov, K., Brekelmans, R., Severo, D., and Makhzani, A. Action matching: Learning stochastic dynamics from samples. In International Conference on Machine Learning, pp. 25858–25889. PMLR, 2023a.
##### [4] Neklyudov, K., Brekelmans, R., Tong, A., Atanackovic, L., Liu, Q., and Makhzani, A. A computational framework for solving Wasserstein Lagrangian flows. arXiv preprint arXiv:2310.10649, 2023b.

> I think including actual runtime for each method will be helpful.

#### **Response**: Thanks for the suggestion. We show the actual runtimes below.

| Method | CSDI | DSPD-GP | Base | RB | Resampling | Resampling + RB |
| --- | --- | --- | --- | --- | --- | --- |
| Runtime (s) | 354.65 | 356.82 | 141.86 | 276.75 | 142.73 | 277.62 |

---

Rebuttal Comment 1.1: Comment: Thanks for answering my questions and providing additional experiments. I am happy to raise my score since this shows an actual gain in performance.

---

Reply to Comment 1.1.1: Comment: We are glad that our response addressed your questions. Thank you very much for raising the score!
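As a companion to the drift-versus-control discussion in this thread (plain flow matching uses only a learned velocity, while the Lagrangian view adds a control term derived from a potential), here is a minimal 1-D Euler-integration sketch. The velocity field, potential gradient, and mixing weight below are hypothetical stand-ins, not the paper's trained networks or its actual sampler:

```python
def euler_sample(x0, velocity, steps=100, t1=1.0, control_grad=None, weight=0.5):
    """Integrate dx/dt = v(x, t), optionally mixing in a control term -grad U(x)."""
    x, dt = float(x0), t1 / steps
    for k in range(steps):
        t = k * dt
        drift = velocity(x, t)
        if control_grad is not None:
            # Two independent estimates of the same flow: the velocity field
            # (kinetic view) and the potential gradient (control view).
            # Combining them is the variance-reduction intuition behind the
            # Rao-Blackwellized sampler discussed above.
            drift = (1 - weight) * drift + weight * (-control_grad(x))
        x += dt * drift
    return x

# Toy flow toward x = 2: here the velocity field and the quadratic-potential
# gradient describe the exact same dynamics, so both samplers agree.
v = lambda x, t: 2.0 - x        # kinetic-view drift
grad_U = lambda x: x - 2.0      # potential view: -grad U(x) = 2 - x

base = euler_sample(0.0, v)                           # flow matching only
combined = euler_sample(0.0, v, control_grad=grad_U)  # drift + control
```

In this idealized case both estimates of the drift coincide, so `base` and `combined` are identical; with noisy learned estimators, averaging the two views is what would reduce variance.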
Convex Markov Games: A New Frontier for Multi-Agent Reinforcement Learning
Accept (poster)
Summary: The paper proposes a new model for multi-agent interaction called Convex Markov Games (cMGs), which generalizes the concept of Markov Games to convex objectives of the induced state distribution. The authors characterise the existence of mixed and pure Nash equilibria and propose a simple algorithm to find them. Finally, they show how such an algorithm can be used to enforce non-trivial behaviours on many relevant instances subsumed by cMGs.

## Update after rebuttal

I went through the authors' responses and other reviews briefly; the authors successfully addressed my main concerns regarding the work and, more crucially, reviewer Bm64's concerns, so I would recommend accepting the paper.

Claims And Evidence: Yes, the proofs are rigorous and the empirical evidence is convincing.

Methods And Evaluation Criteria: The paper is more of an exploratory work, but the empirical instantiations of the convex utility are convincing and relevant.

Theoretical Claims: Yes. Prop. 1, Th. 1, and Th. 2 have been checked; I haven't found any issue with them.

Experimental Designs Or Analyses: Yes, all the ones in Section 5. They are sound and show valid results.

Supplementary Material: Yes, up to Section E excluded.

Relation To Broader Scientific Literature: This work is a fundamental contribution for the convex generalisation of Markov Games and their characterisation. It shows that common arguments can be applied to these games as well (results in Prop. 1, Th. 1, Th. 2) with success.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
- The paper is extremely well-written and rigorous in the exposition as well.
- The decision-making problem is relevant and the empirical corroboration is rather interesting.
Weaknesses:
- The theoretical contributions are mostly built on a combination of previous results; this is far from being an issue per se, as they are applied to novel problems, but I would make this fact slightly more explicit in the main paper as well, e.g., by directly citing references in theorem statements.

Other Comments Or Suggestions:
- I would reference the proof of Eq. 12 in the Appendix (Corollary 1).
- I would anticipate the description of the projection operator after Theorem 2.
- I would describe the properties (and limitations of Algorithm 1 as described in the conclusions) more explicitly.

Questions For Authors:
- Is the main outcome of the paper that convex MGs are structurally not too different from their non-convex counterpart? As most results for linear games do transfer. If so, I would make this statement clearer in the introduction.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your positive endorsement and helpful comments. We are pleased to hear you not only appreciate our proposed generalization of Markov games, but also checked and found **"the proofs are rigorous and the empirical evidence is convincing"**. Thank you for saying the **"paper is extremely well-written and rigorous in the exposition"**; we will further improve the writing following your comments, with the proposed changes listed below.

- We can add a comment that Prop. 1, Th. 1, and Th. 2 are novel applications of Glicksberg, Debreu/Kosowsky, and Gemp to the cMG framework.
- We will reference Corollary 1 as you say.
- We can move the discussion around (13) to be after Theorem 2.
- We can move/copy the last paragraph of the conclusion to a limitations paragraph immediately after the Algorithm 1 code block. We can also mention some impossibility results on computing NEs and point out that PGL is not guaranteed to converge to NEs.

Regarding the main outcome of the paper, we are not sure we want to make a general statement that cMGs are structurally similar to MGs. It is true that pure NEs still exist, but value functions do not. And at least the reason pure NEs exist requires more sophisticated and different proof techniques, while prior existence proofs leveraged the existence of value functions. However, there is a key point of similarity between them. cMGs (similar to MGs) admit a clear solution concept that enables a formal computational investigation, which, hopefully, similarly to the case of MGs, will allow for many theoretical and practical research advances.

---

Rebuttal Comment 1.1: Comment: Given the rebuttal, I am positive about confirming my score, provided that the authors address the clarity issues listed by the reviewers and include the relevant missing literature outlined by Bm64.
Additionally, I would strongly suggest including a reference to the concurrent [1], which introduces a definition of convex Markov Games as well, though with a slightly different definition and a different focus in contributions.

[1] Zamboni et al., Towards Principled Multi-Agent Task Agnostic Exploration, 2025
Summary: The paper presents convex Markov games, a framework that extends Markov games by generalizing the players' linear utilities to any convex function of their state-action occupancy measures. A similar generalization was recently studied in single-agent problems, extending MDPs to convex MDPs. Here the same extension is made for multi-agent problems. First, the paper states the existence of mixed-strategy Nash equilibria in convex Markov games. Then, the paper presents a method to compute approximate equilibria in practice. The method is based on projected gradient descent of an upper bound on the exploitability of each agent's policy, which is in turn related to Nash equilibria. The method is evaluated in a variety of small-scale domains with knowledge of the transition parameters, comparing the proposed algorithm with various (self-designed) baselines, such as simultaneous gradient descent and black-box exploitability minimization.

Claims And Evidence: There is a potential error in the way pure/mixed strategies are defined. The paper says "If all probability mass of $\rho$ is on a single policy $\pi_i$ [also stochastic], we call $\rho$ a pure-strategy and write $\pi_i$ directly". I am not sure this makes sense, because the occupancy measure of any stochastic policy can be equivalently seen as the occupancy measure of a mixture of policies. The policies in the mixture do not even need to be stochastic. This is a classical result that can be found in (Puterman 2014). If this is correct and I am not missing something, it would mean that mixed strategy and pure strategy are the same thing here. Hopefully, this is not too problematic, but it shall be fixed (or explained).

---

Solved below: Increasing my score to accept

Methods And Evaluation Criteria: Since this is a new framework, there aren't obvious baselines to compare the proposed algorithm to. However, this makes the significance of the algorithmic contribution unclear.
Is it problematic to adapt any other algorithm that works for convex MDPs to the multi-agent version? There is not much convincing discussion on that.

Theoretical Claims: I didn't carefully check the correctness of the proofs and the theory beyond what is included in the main paper. I reported a potential issue above.

Experimental Designs Or Analyses: The experimental analysis is carried out in small-scale domains with knowledge of the transition model. It is fine at this stage of development, but perhaps mentioning multi-agent RL in the title can be misleading as to the kind of challenges addressed in the paper.

Supplementary Material: I didn't check the supplementary material.

Relation To Broader Scientific Literature: This looks like a very natural extension of the recent stream of works on convex utilities in RL. In terms of results, I feel like tighter connections with the theory of Markov games could be drawn here. I am not an expert in the latter literature, but it is definitely hard to grasp the additional computational/statistical challenges introduced by convex utilities w.r.t. what is known in Markov games.

Essential References Not Discussed: The account of prior works in convex/general utilities RL is very limited.
A few related references are missing; I am providing a (possibly incomplete) list below:
- Cheung, Regret minimization for reinforcement learning with vectorial feedback and complex objectives, 2019
- Cheung, Exploration-exploitation trade-off in reinforcement learning on online Markov decision processes with global concave rewards, 2019
- Geist et al., Concave utility reinforcement learning: The mean-field game viewpoint, 2021
- Mutti et al., Convex reinforcement learning in finite trials, 2023
- Barakat et al., Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space, 2023
- Moreno et al., Efficient model-based concave utility reinforcement learning through greedy mirror descent, 2024
- Moreno et al., MetaCURL: Non-stationary Concave Utility Reinforcement Learning, 2024
- Prajapat et al., Submodular reinforcement learning, 2024
- Celikok et al., Inverse Concave-Utility Reinforcement Learning is Inverse Game Theory, 2024
- De Santi et al., Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods, 2024

I think those papers should at least be mentioned in the manuscript and, in some cases, discussed in detail.

Other Strengths And Weaknesses:

Strengths
- Natural extension of prior work on single-agent convex-utility decision making;
- There seem to be several interesting applications fitting the framework;
- The algorithm is promising, although it lacks intuition and clear motivation.

Weaknesses
- An approximate algorithm is given before even discussing the computational complexity of the original problem;
- Which guarantees does the algorithm have in general? Can we give stronger guarantees under common game-theoretic assumptions (e.g., zero-sum, potential games...)?
- Issue with the definition of mixed/pure strategies mentioned above;
- Limited experimental analysis, without competitive baselines and RL.
This paper addresses a very nice direction, which may have potential impact on the understanding of decision making beyond linear utilities, building on prior results in single-agent settings. I am currently providing a borderline evaluation given my confusion over the connection between the results reported in the paper and related works on Markov games and convex MDPs. However, the framework may be an important contribution in itself and I may be swayed towards acceptance with a convincing author response.
Other Comments Or Suggestions: The introduction could give clearer credit to the convex MDP literature. To the best of my knowledge, the problem was introduced by Hazan et al. (2019) and developed in several subsequent works. An overview of previous results could help clarify the unique challenges of the multi-agent setting. The same goes for the prior works and results on Markov games. Moreover, the literature on convex RL includes various alternative formulations of the problem, such as convex RL (Hazan et al. 2019), finite-trial convex RL (Mutti et al. 2023), and submodular RL (Prajapat et al. 2023). Perhaps mentioning why one formulation has been preferred over another for the extension to Markov games could be useful. The introduction could also give more intuitive explanations of how a convex MG works, which is only introduced in the subsequent section.
Questions For Authors:
- What is the complexity of computing equilibria in this setting? It appears that the problem is intractable even for general-sum Markov games. I guess it is not tractable for general convex utilities as well.
- Projection over the tangent space of $\mathcal{U}_i$: Can you provide some intuition on the meaning of the projection? Why is this necessary, beyond the link with the exploitability upper bound?
- The experiments are a little hard to process: How do we know that PGL is a good algorithm for cMGs?
- Is the symbol $r_i$ introduced somewhere? I guess it is a reward: Why do we need it in the first place?
- Is PGL guaranteed to converge to an approximate equilibrium?
- The title mentions RL, but there's not much RL in the paper. How can the ideas be translated to large problems and/or unknown transitions?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, thanks for your comments and for highlighting our work as a **“Natural extension of prior work in single-agent convex utilities decision making”** with **“several interesting applications”**. Your feedback will help us greatly improve the paper.
Pure/Mixed Strategies and Deterministic/Stochastic Policies: Thank you for raising this point of confusion. Reading your comment, we agree factually with each technical statement, but are confused by your inference. Recall that we define strategies in terms of policies, not occupancy measures. While it is true that a pure strategy (a singleton stochastic policy) and a mixed strategy (a distribution $\rho$ over many policies) can both induce the same occupancy measure, these strategies are not the same from the policy view (by definition). We discuss the importance of this discrepancy above Thm 1. In addition, we understand that optimal policies can always be deterministic (stated in line 78, col 2); however, this does not mean that there always exists an NE that contains only deterministic policies (is that your concern?). Recall that cMGs generalize MGs, which generalize normal-form games like rock-paper-scissors, which famously only has an NE in stochastic policies.
Baselines: Sim and RR represent, in fact, two ways of extending (variational) policy gradient (VPG) to the multi-agent setting. Zhang et al. ‘20 proved that VPG solves cMDPs. They study the model-free RL setting, whereas we study the model-based setting where we can compute exact policy gradients per player with differentiation.
Complexity: The Bellman equation no longer holds in cMDPs and hence does not hold in cMGs. Previous NE-existence results for MGs [Fink, ‘64] leveraged state value functions, which we cannot do here. We should emphasize the loss of Bellman optimality, so thank you for raising this issue.
In addition, computing NEs in MGs is PPAD-hard [pg 3, “The Complexity of Markov Equilibrium in Stochastic Games”]; cMGs are at least as hard. Therefore, PGL expectedly lacks guarantees, although it mimics a standard protocol for solving games (McKelvey & Palfrey, ‘95; ‘98; Gemp et al., ’22; Eibelshauser & Poensgen, ‘23). We expect we can give guarantees under certain assumptions (potential, zero-sum), but we leave that to future work. Note that many of the domains we study empirically (e.g., IPD) satisfy neither assumption.
CURL Literature: Thanks for your pointers to cMDP/CURL research. This list also acutely highlights that, despite the strong interest in CURL, *no one* has studied the setting where $n$ CURL agents interact. We introduce the necessary scaffolding upon which to extend single-agent CURL to its multi-agent analogues, similar to how Littman’s MGs generalized RL. Each of the references you provide constitutes an important single-agent advancement that could be interesting to examine in conjunction with other learning agents, and we are happy to raise them as important directions for future work.
Hazan vs Alternatives: First, note that we credit Hazan in the intro. We generalize from cMDPs because they are convex programs (CPs) with properties critical to our proofs (the solution set of a CP is convex [Thm 1]; convex losses allow suboptimality bounds in terms of gradient norms [Thm 2]). In contrast, finite-trial convex RL is NP-hard in the single-trial setting, and for submodular RL, "the resulting optimization problem is hard to approximate".
Tangent-space Projections: Consider a 1-state, 2-action MDP with policy $[p_1, p_2]$; each action earns +1 reward. The expected reward is $p_1 + p_2$ and the gradient is $[1, 1]$. Note that any policy is optimal, yet the norm of the gradient is $\sqrt{2}$.
However, if we project the gradient onto the tangent space of the simplex (i.e., subtract the mean of the entries), we find it is $[1, 1] - [1, 1] = [0, 0]$, which has zero norm. For more info, see Linear and Nonlinear Programming by Luenberger, 1984, Sec 12.4, p 364.
Empirical Support for PGL: Figure 3 demonstrates PGL reaches low $\epsilon$ (equivalently, well approximates NEs) in 3 games. We continue to report low $\epsilon$ in the other games. We also discuss how PGL not only finds NEs, but ones with interesting behavioral properties (e.g., Tables 2, 3, 4).
Model-based vs Model-free RL: We point out “model-free” RL as future work, but will further qualify that PGL is “model-based” in the intro. Note that our title highlights the cMG framework, not PGL. We are happy to point out “model-free” as an important direction for future work within the cMG framework.
Minor: Yes, $r_i$ can be interpreted as a reward vector.
---
Rebuttal Comment 1.1: Comment: Dear authors, Many thanks for getting back to me with thorough replies. I am providing a few follow-up comments below to make sure some of the points I raised are clear enough. I do not think any of my concerns is a "deal-breaker": The paper is interesting and, although many aspects could be further studied, I may be applying an unfair standard here. I will engage with other reviewers and reconsider my evaluation.
COMMENTS
1) Pure/Mixed strategies: The authors might be right, but the rock-paper-scissors example makes me think my point wasn't clear. Of course I see the difference between playing deterministic policies and playing stochastic policies. Indeed, I believe the right definition of a pure strategy is "each player is playing a deterministic policy" and a strategy is mixed when some players are playing stochastic policies. However, according to the definition in the paper, players can play "pure strategies" with stochastic policies, while in mixed strategies they are playing mixtures of stochastic policies.
To me both look like mixed strategies: I am wondering if the paper is actually only analyzing mixed strategies and mixed strategies in disguise. The value of the game does not depend on the policies given the occupancy, which means that for every mixed strategy I have a pure strategy (of stochastic policies) inducing the same occupancy, hence the same value. Perhaps there is a difference in what happens with unilateral deviations from a fixed strategy: Even if the value is the same, deviating from a "pure" strategy of stochastic policies is different than deviating from a "mixed" strategy of mixtures of stochastic policies. I do not see it, so I am asking the authors if that is the case.
2) Hazan vs alternatives: This point would be stronger if the paper was giving an algorithm for exact NE, which clearly cannot be obtained for other formulations that are NP-hard even in the single-agent setting. However, the paper is giving an approximate solution. Finite-trial and submodular formulations may also admit "approximate" solutions (not approximating the global optimum, just a "good" solution)...
3) Guarantees of PGL: The guarantees I was looking for are not empirical, but theoretical. Do we have any indication that PGL is guaranteed to converge to an equilibrium beyond empirical evidence in some domains?
4) Model-based vs model-free RL: I am worried I have to disagree here. RL is a framework for decision-making **from interactions**. Even in model-based RL, the model of the environment is learned from samples. If the model is known and the algorithm is not sample-based in any way, I do not think it shall be called RL.
---
Reply to Comment 1.1.1: Comment: Thank you again for your comments and for your continued engagement.
1. Pure/Mixed strategies. We describe a cMG below in an effort to elucidate some of your concerns. We are assuming you have raised this discussion of definitions as it relates to the NE-existence results, but please correct us if that assumption is incorrect.
TL;DR: different policies are not generally exchangeable in games; even though different policies can induce the same occupancy measure (and the same value), they induce different environments for the other players and so affect the NEs.
Consider a cMG with two players. Player 1 stands on a safe podium and controls a switch. If the switch is on, the ground is lava. If the switch is off, the ground is safe. Player 2 stands on a safe spot and can observe the switch. Player 1’s utility is to maximize the long-run entropy of lava occurrence summed over the state-space as well as the entropy of player 2’s state distribution, i.e., aim for [50% lava, 50% no lava] and for player 2 to explore beyond their safe spot. Player 2 receives a small reward for staying in their safe spot, a very large negative reward for stepping on lava, a very large reward for collecting a treasure from outside their safe spot, and zero otherwise.
Let player 2’s current policy be to never move from their safe spot. Consider two different policies (both with the same value) for player 1. In policy A (a mixture of deterministic policies), player 1 either turns the switch on at the beginning of the cMG or leaves it off. In policy B (a singleton stochastic policy), player 1 flips the switch with 50% probability at every time step. Both of these induce the same occupancy measure and also the same value for all players given player 2’s current policy. However, player 2’s best response to policy A, call it policy A’, might be to stay inside when they see the switch is on (because given player 1 plays A, they know it is on for the whole episode), but when they see it is off (for the day), they venture out to collect the treasure. In contrast, player 2’s best response to policy B, call it policy B’, is likely to always stay inside and collect their mild reward.
The takeaway is that although policies A and B induce the same occupancy measure, they actually have very different effects on player 2’s decision making.
Both (A, A’) and (B, B’) are NEs; however, (B, A’) and (A, B’) are not. It’s our understanding that you are concerned (B, A’) and (A, B’) might trivially exist as NEs based on your reasoning; however, note that A and B are not exchangeable even though they achieve the same value. Prop 1 proves (somewhat trivially) that NEs of type (A, A’) exist (although not necessarily with such low support, e.g., mixtures of only 2 policies vs infinite policies). Theorem 1 proves NEs of type (B, B’) exist and requires more careful attention.
2. Note that although convex MDPs admit efficient approximate algorithms, cMGs in general do not. Even computing an $\epsilon$-NE for constant $\epsilon$ is PPAD-complete for n-player, general-sum games: “Inapproximability of Nash Equilibrium”, Rubinstein ‘15.
3. No, PGL has no theoretical guarantees, but that is expected. Note that it is known that no dynamics converge to NE in n-player, general-sum games: "An impossibility theorem in game dynamics", Milionis et al. '23.
4. We agree. PGL does not meet the precise definition of a model-based RL method. We just mean it corresponds to the infinite-sample limit of a model-based approach. We do not claim anywhere in the current draft that PGL is an RL method, so we will leave it that way.
Thank you again for engaging with us and for your interesting questions. Since, as you state, you find the paper interesting, many aspects of which could be further studied, and none of your concerns is a deal-breaker, could you please consider increasing your score accordingly?
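The tangent-space projection example from the authors' reply above (the 1-state, 2-action MDP where every action earns +1 reward) can be checked numerically; a minimal NumPy sketch:

```python
import numpy as np

# 1-state, 2-action MDP from the reply above: every action earns +1 reward,
# so the gradient of the expected reward w.r.t. the policy [p1, p2] is [1, 1].
grad = np.array([1.0, 1.0])

# The raw gradient norm is sqrt(2), even though every policy is optimal.
raw_norm = np.linalg.norm(grad)

# Projecting onto the tangent space of the simplex (directions that preserve
# p1 + p2 = 1) amounts to subtracting the mean of the entries.
projected = grad - grad.mean()

# The projected gradient is [0, 0], whose zero norm correctly certifies optimality.
print(raw_norm, np.linalg.norm(projected))  # 1.4142... 0.0
```

This is why the projected (rather than raw) gradient norm is the meaningful stationarity measure on the probability simplex.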
Summary: The paper studies a generalized model of Markov games, called convex Markov games (CMG). The difference between CMGs and standard Markov games is that the former adopt convex functions as the players' utility functions, which are more general than the linear functions used by the latter. More specifically, each player's utility is a convex function of the state-action occupancy probabilities. The paper studies n-player CMGs. It proved that a Nash equilibrium---in particular a pure one---always exists in every CMG. It then presented a projected-gradient loss minimization algorithm to compute an approximate equilibrium. Finally, experiments were conducted to evaluate the results, showing novel solutions that approximate human play.
Claims And Evidence: The paper provided proofs and experiments to support the claimed results. The proofs look sound but could possibly be optimized a bit more.
Methods And Evaluation Criteria: The methods used look reasonable to me.
Theoretical Claims: The proofs look sound but could possibly be optimized a bit more.
Experimental Designs Or Analyses: The experiment designs look reasonable to me. I don't find any issues with this part.
Supplementary Material: I went through the proofs but did not verify everything in great detail.
Relation To Broader Scientific Literature: Convex Markov games are a natural extension of Markov games, so they are closely related to the areas of game theory, sequential decision making, and reinforcement learning.
Essential References Not Discussed: No.
Other Strengths And Weaknesses:
Strengths: Convex Markov games are a well-motivated model. The paper made valuable contributions towards understanding such games.
Weaknesses:
- The main theorem, Theorem 1, is an important result, but the proof relies heavily on the result by Kosowsky, which was intended for an even more general class of games.
While I have no criticism about this proof approach, I feel it would still be nice to see if a simpler proof can be constructed without relying on the result by Kosowsky. A proof specifically for CMGs could possibly help us understand CMGs better.
- A more critical issue is the clarity of Section 4. The current presentation makes it hard to understand the computational approach. For example, while the authors introduced exploitability, what does it mean for the computation of a Nash? I think a high-level explanation of the general approach in several sentences could be helpful. Additionally, what does the bound presented in Theorem 2 mean? Somewhat abruptly, the task reduces to minimizing the loss term in Equation (11). But even if this term is minimized, there is still a log term in the upper bound of $\epsilon_i(\pi)$. So what exactly is the relation between a minimum solution to this problem and an equilibrium? Does that mean the log term is unimportant? In Theorem 2, $\pi$ is said to be an approximate equilibrium without defining what an approximate equilibrium means. The part below Equation (12) is also hard to follow, without sufficient useful information about the intuition. The Opt operator in Line 3 of Algorithm 1 also requires more explanation. What specific optimization problem does it solve, with respect to $\pi$ and the gradient parameter?
In summary, while the overall structure and organization of the paper are reasonable and the problems studied are interesting and well-motivated, the paper could have been made more accessible and informative. There are some clarity issues, and some parts lack rigorous definitions and necessary details.
Other Comments Or Suggestions: I don't have any other comments besides what I commented above.
Questions For Authors: See Other Strengths And Weaknesses. There is no need to answer every question though; some of them are just examples of the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your constructive feedback. We are pleased to hear you find the proposed convex Markov game model **“well-motivated”** and the proof of existence of pure Nash equilibria an **“important result”**. We appreciate the need to clearly explain this important result and our proposed algorithm.
Regarding NE existence, we will include a simplified version of Theorem 1’s proof based on Debreu that avoids some of the *topological* discussion of Kosowsky (although contractibility is still a key component). Note that unlike the NE-existence proof for (vanilla) Markov games, which makes use of the Bellman optimality of state-value functions and appears tailored “specifically for MGs”, Bellman’s equation no longer holds in convex MDPs, and so this proof technique is not available to us. We understand your point, though, about how a proof can help one understand the domain, but, in this case, developing a more “RL-flavored” proof appears to us to be non-trivial.
Thank you for pointing out these points of confusion in Section 4. Your comments will help us improve the writing. We answer your questions below; please let us know if these answers make the approach clearer.
Approximate NE: An NE is precisely a profile $x$ such that exploitability (9) equals zero. Every policy profile is technically an "approximate equilibrium"; $\epsilon$ measures the level of approximation. This is analogous to saying every policy is an "approximate" solution to an MDP; the level of approximation is what is relevant. If $\epsilon = 0$, the profile is no longer approximate, it is exactly an NE.
High-Level Approach: It should also be clear from the $\max$ in (10) that exploitability (9) is always non-negative. Therefore, one can imagine solving for an NE (where $\epsilon = 0$) by minimizing exploitability, i.e., using it as a loss function.
Exploitability is non-convex (this is not obvious and we will add a comment), hence vanilla gradient descent is not guaranteed to find a global minimum. However, we leverage "temperature annealing" ideas that have been successful in several other game classes (NFG/EFG/MGs) and show they can be successful empirically for cMGs as well.
Log Term: The log term is still important and appears again in (12), which relates the minimum of $\mathcal{L}^{\tau}$ (11) to an equilibrium (9). In order to shrink the log term we must decrease the temperature $\tau$, which is why we anneal $\tau$ with a schedule in experiments (see the input to Algorithm 1).
Opt Notation: The “Opt” notation in Algorithm 1 is meant to serve as a first-order optimization oracle, e.g., gradient descent, for minimizing $\mathcal{L}^{\tau_t}$. Opt($\pi, \nabla_{\pi} \mathcal{L}^{\tau_t}$) just means that the user supplies an initial policy $\pi$ and the gradient operator, and the first-order oracle performs several descent iterations before returning a new policy.
Summary: The authors introduce a class of convex Markov games that allow general convex preferences, prove that pure-strategy Nash equilibria exist, and provide a gradient-based approach to approximate these equilibria, noting the computational difficulty of finding the actual equilibria with general solvers. The significance of this work comes from extending convex MDPs to multi-agent settings and providing rigorous analysis to show the existence of NE.
Claims And Evidence: Every proposition, lemma, and theorem is accompanied by clear proofs and relevant references. It is evident that the authors dedicated significant time to examining various mathematical theorems (such as fixed-point theorems, convex programming properties, etc.) to substantiate their claims. Derivations do tend to skip quite a few steps, though I understand that the authors had to be succinct due to page limits. Since this manuscript is dense with notation and mathematical derivations, adding more detail in the appendix would've been great.
Methods And Evaluation Criteria: The authors provide a detailed overview of their simulation domains, explain each aspect of the policy, and offer clear insights into how each policy characteristic should be interpreted. They provide brief descriptions of what "creativity", "fairness", etc., mean, and their remarks under each subsection are thorough. One comment as a reader is that it took a few iterations to understand how each application is relevant to different aspects, and what the authors mean by "creative" or "fair" policies. For "creativity", I'm still not sure why the presented policy in the path-finding domain is considered creative. For "fairness", I'm not sure why fair visitation of states is important in practice, since that may not always be a desired characteristic. I do think the "safety" subsection is convincing and easy to follow.
Theoretical Claims: I reviewed all derivations in the main body, and the proofs of Theorems 1, 2, and 3 in the appendix. See comments under "Claims And Evidence".
Experimental Designs Or Analyses: See comments under "Methods And Evaluation Criteria".
Supplementary Material: I reviewed all derivations in the main body, and the proofs of Theorems 1, 2, and 3 in the appendix.
Relation To Broader Scientific Literature: Extending convex MDPs to multi-agent settings enables scalable approaches to handling the complex interactions between multiple agents. This extension provides a foundation for designing more robust and tractable solutions in settings with multiple decision-makers.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses: The authors did a great job pointing out the gap in the literature and highlighting their contribution from early on. By doing so, the authors set the tone for the manuscript and made it easier for the reader to understand the manuscript's purpose and relevance. The paper is well-structured, with the authors effectively establishing the theoretical foundation through clear proofs and intuitive explanations of how each expression is derived. They then offer a thorough and insightful analysis of their simulations. See comments under "Claims And Evidence" and "Methods And Evaluation Criteria" for additional detail. Meanwhile, I do feel that too much information was front-loaded in Section 5. It may be due to how I process information, but I felt that not all the information provided at the beginning was necessary for each subsection, so deferring some to specific subsections would've been better for readability.
Other Comments Or Suggestions: 1. At the beginning of Section 5, the authors mention "four" baselines, and I see that they immediately list them out (min $\epsilon$, Sim, RR, and SGAME).
But it looks like these four baselines are only mentioned under "creativity", so I just want to note that I was a little confused when going through the rest of Section 5. I personally would've been okay with the mention of baselines in each specific subsection.
2. On page 6, in the last sentence of the first paragraph under "Remark", should it be just "player" or "players" instead of "player's"?
Questions For Authors: None.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, thank you for your constructive feedback. We are glad to hear that the significance of our work carried through and that you think we **“did a great job pointing out the gap in literature”** (convex MDP + multi-agent).
We also understand your comment regarding the layout of the experiments section and can see how “front-loading” so much context can disrupt the flow and actually make the remaining subsections, each focused on one domain, harder to understand. We will move (or copy) some of this information to each subsection where it is immediately relevant to the reader. We will also add more detail to the appendix, in particular, adding an alternative (arguably simplified) Nash equilibrium existence proof based on Debreu ‘52 rather than Kosowsky ‘23.
Regarding “creativity” and “fairness” terminology: The terminology "creative" is borrowed from prior work (Zahavy et al., ‘23). The rationale is that adding the entropy bonus encourages exploring the whole state-action space so the agents can possibly uncover new "creative" solutions. We can add this explanation to clarify. The “fairness” experiment looks at fair visitation of the plays “Bach” and “Stravinsky”; this game, also known as “Battle of the Sexes” in the classical literature, exhibits two pure equilibria where only one player’s preferred outcome is achieved and one mixed equilibrium where both outcomes are achieved in expectation. We hope our fairness metric makes sense in that context, but we agree that “fairness” of state visitation is not suitable in every domain.
Minor: Good catch. “player’s” should be “players”.
On the Impact of Performative Risk Minimization for Binary Random Variables
Accept (poster)
Summary: The paper devises two metrics to study the path taken by models in a performative setting and applies them in the study of binary random variables (shifted Bernoulli).
Claims And Evidence:
- They claim that they analyze the impact of Performative Risk Minimization. However, as far as I can tell, there is no such thing in the previous literature (especially not in Perdomo et al. (2020)) and the paper never defines what they understand by PRM. They cite three papers, but those solve different optimization problems: Perdomo et al. (2020) and Brown et al. (2022) optimize for the stable point, while Ray et al. (2020) optimizes for the optimal point. It is unclear at the beginning of the paper what they are trying to do and what their objective is. In addition, they claim that Performative Prediction tries to minimize the objective function presented in (1). However, the standard definition of Performative Prediction in Perdomo et al. (2020) does not fall into this equation. It is unclear how the other papers relate to equation (1) too.
- The usefulness of the metrics cannot be fully supported with just the example of binary random variables. I would suggest extending the analysis to other problems (like what other papers in the field – like Izzo et al. (2022) – do). Furthermore, there is a lack of motivation for the binary problem; why is it relevant for Performative Prediction? A slight justification only appears in the last section with the use cases. The case studies are too simplistic to be considered realistic and, therefore, it is hard to understand why this is an interesting problem for Performative Prediction.
Methods And Evaluation Criteria: - The claims on the binary random variables are supported mainly by theoretical evidence, which makes sense in this case.
Theoretical Claims: - I have not checked the correctness of the proofs.
The assumptions (loss and problem setup) should be better explained (see **Claims And Evidence**).
Experimental Designs Or Analyses: - I did not reproduce the experimental results.
Supplementary Material: - I didn't check.
Relation To Broader Scientific Literature:
- This paper adds a couple of metrics to study the bias of estimators in Performative Prediction.
- The link between the optimization algorithm they use, “performative risk minimization”, and the loss is not very clear.
- The paper seems like an extension of examples used before in papers like Perdomo et al. (2020) or Izzo et al. (2022). Nevertheless, in these papers they were used to exemplify their claims, not as their main results.
Essential References Not Discussed:
- The paper doesn't discuss Lin & Zrnic (2024), Izzo et al. (2022), or *Performative Prediction with Neural Networks* (Mofakhami et al. 2023) when saying “To our awareness, the only work that studies properties beyond accuracy is that of Jin et al. (2024)”. Works in Performative Prediction generally study the performative loss, which sometimes might not transfer into accuracy (see *Performative Prediction with Neural Networks*). Nevertheless, others like Izzo et al. (2022) or Lin & Zrnic (2024) study how the model converges to the closed-form solution. This could be fixed if "accuracy" were changed to “performance”.
- The paper doesn't discuss how the regret proposed in *Regret Minimization with Performative Feedback* (Jagadeesan et al. 2022) is similar to the mean shift that you propose in your paper. Its citation is also missing.
- The paper lacks a discussion of how your equation (1) is similar to the discounted loss that they consider in *Performative Reinforcement Learning* (Mandal et al. 2022). Its citation is also missing.
Other Strengths And Weaknesses:
- Strengths
  - The paper identifies and explores the impacts of Performative Prediction with a broader scope than just the performance of the models, which is a very interesting direction for the Performative Prediction field.
- Weaknesses
  - The problem studied is too narrow. Would it be possible to apply the metrics to other problems? How would the conclusions be different? Why is the binary random variables problem key for Performative Prediction?
  - The conclusions are too shallow for the number of theoretical derivations, experiments, and plots (“Our results suggest that the performatively optimal (PRM) path is, in general, biased and introduces a non-zero mean shift”). Isn’t this conclusion expected, as there is a distribution shift? Why are all the mathematical derivations relevant?
  - The relation to the previous literature is not clear enough (see Claims And Evidence, first point).
Other Comments Or Suggestions: none
Questions For Authors: See weaknesses. I would be willing to change my evaluation of the paper if the metrics were applied to other problems or if it were better justified why the binary random variables problem is key for Performative Prediction. Furthermore, the authors would need to show that these metrics can be used to obtain deeper conclusions. Another direction to improve my evaluation would be to study more realistic case studies where the effects of biased paths can be shown.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive review! We address your concerns and questions below and remain available for further discussion.
**The paper never defines what they understand by PRM … They cite three papers but those solve different optimization problems**
Thank you for your comment. The optimization problem (1), whose solution we analyze, in the special case of T = 1 and slow deployment, is exactly the classic problem of finding a performatively optimal model, introduced in Perdomo et al. The cases of T > 1 and rapid deployment cover extensions discussed in Brown et al. and Ray et al. By Performative Risk Minimization we simply mean the solution (a sequence of models) which minimizes the performative risk. This is exactly the solution to problem (1), which we introduce at the start of Section 3. It is also equivalent to finding a performatively optimal path. While many works develop methods for finding performatively stable models instead, this is arguably because of the hardness of finding performatively optimal points in general settings. Already Perdomo et al. argue (see Section 2 in their ICML version) that minimizing the performative risk is a natural objective in performative prediction. They also show that, under some assumptions, stable and optimal points are “nearby” (their Section 4).
**The usefulness of the metrics cannot be fully supported with just the example of binary random variables … The paper seems like an extension of examples used before in papers like Perdomo et al. (2020) or Izzo et al. (2022)**
The studied learning problem is indeed less general than those usually considered in the performative prediction literature. We refer to our response to reviewer 5zXv for an explanation of how our work can serve as a foundation for analyzing more general learning tasks. However, we also study a different mathematical problem compared to that prior work.
Works in performative prediction focus on finding performatively optimal/stable points. In contrast, we study the properties (e.g. bias and shift) of models that minimize the performative risk (assuming such models have already been found). This requires a different set of mathematical tools, as existing assumptions (e.g. convexity, smoothness, Lipschitzness w.r.t. the Wasserstein distance) from prior work are suited for optimizers searching for optimal/stable points and do not necessarily facilitate an analysis of our problem. We restrict our analysis to a mean estimation problem for two reasons: - First, to understand the impact of choosing performatively optimal models on the distribution, we need structural assumptions on how the distribution changes as a function of the deployed model. To our knowledge, such assumptions have previously only been proposed in specific examples (e.g. those referred to by the reviewer, location families, etc.). Creating plausible models of how general distributions change in performative settings is indeed an interesting research direction, but orthogonal to our work. - Second, obtaining theoretically-precise results about the impact of PRM was already challenging for binary mean estimation, as is evident from the need to resort to simulations in more advanced cases. We certainly agree that exploring other learning setups in the future is an important research direction, but we expect that this will have to be limited to simulations, due to likely mathematical intractability. While we hope that our work can serve as a useful foundation for such exploration, this is beyond the scope of our theoretical paper. This is also clearly reflected in the paper title and the contributions part of the introduction. **Related work** Thank you for the suggested papers. We will cite and discuss them. We agree that minimizing the performative loss does not always translate to accuracy. We indeed intended to say that other works focus on loss minimization. 
We will edit the text as suggested and cite the works in the related work section. We will also cite Jagadeesan et al. (2022) and Mandal et al. (2022), alongside Brown et al. (2022) and Ray et al. (2022a). All these works indeed motivate our loss (1), whose minimizers we study throughout the paper. We do not see a link between our mean shift metric and the regret of Jagadeesan et al. Could the reviewer clarify? **Would it be possible to apply the metrics to other problems?** We again refer to our response to reviewer 5zXv for a thorough discussion. **Isn’t this conclusion expected, as there is a distribution shift? Why are all mathematical derivations relevant?** Our theory not only proves the presence of, but also quantifies, the shift and bias, for a wide range of problem parameters. The reason we put such a general comment as a summary was that conclusions about the direction and amount of shift and bias are, in general, case-dependent. We have described the more detailed, case-specific conclusions throughout the technical sections. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications about the concepts you use and the problem you are dealing with. While I see how the problem is defined at the beginning of Section 3, I’d suggest introducing these definitions earlier. Clearly stating in the introduction the difference between stable and optimal points and linking them to RM and PRM, respectively, would help a lot. Furthermore, improving the clarity of the related work section would also help. Explicitly stating what kind of problems past works have addressed when finding the *optimal* point would give better context and motivation for the problem (what you say in the related work section of this rebuttal might actually be enough - “We indeed intended to say that other works focus on loss minimization”). Explaining better the impact and restrictions of a mean estimation problem was also helpful to understand why you consider a narrow problem. 
I think it’d be important to include them in the main version of the text. While I consider it *very* useful to initiate the discussion of the broader effects of PRM, the focus on bias and mean shift seems somewhat straightforward. I have doubts about how directly all the other tools developed in the paper can be applied to the simulations that you propose for more advanced settings. Could you provide more explicit details of how your paper can be used as a foundation for that future research? The contribution and impact of the paper wouldn’t be as limited if other settings were considered (or at least a more direct and explicit connection were made). Taking into account that all my previous comments could be included in the main text and the limitations discussed, I will raise my score to 2. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for engaging in further discussion! We are glad to hear that our clarifications have been helpful. We thank the reviewer for their suggestions, which we agree will greatly benefit the exposition. As suggested, we will include the above discussions in the next version of the manuscript. In particular, we will clarify the relation between stable and optimal points and link them to RM and PRM early on in the text. We will also clarify the statement in the related work section and expand on the relevance and limitations of the mean estimation setup in the main text. To further explain how our paper can serve as a foundation for future work, we note that the notions we consider in the binary setting have analogs in general learning tasks as well. In particular, we discuss below how the notions of mean shift, bias, and the error-uncertainty trade-off can be generalized to more complex settings. Therefore, the notions we have introduced in our work, together with the general framework of Section 3.1, can serve as a foundation for future work on exploring the impact of PRM. 
Regarding the **mean shift metric**, we identify two possible extensions, for parametric and non-parametric distributions, respectively. * Consider a setting where the distribution is parametrized by a finite number of parameters, $D_t = D(\mathbf{w}_t)$, where $\mathbf{w}_t = (w^1_t, \dots, w^k_t)$. This is, for example, the case for distributions defined via causal graphical models, as well as for many common distributions (e.g., exponential families). Then, one can define the parameter shift metric as $\mathtt{shift}_t = E(\mathbf{w}_t - \mathbf{w}^0_t)$, where $\mathbf{w}^0_t$ are the parameters the distribution would have at time $t$ if it were not affected by performativity. * In a non-parametric setting, we can instead define a divergence-based metric $\mathtt{shift}_t = E(K(D_t, D^0_t))$, where $K$ is an arbitrary divergence function (e.g., KL-divergence). The function $K$ can be designed to capture an undesirable shift in the distribution, according to the target application. These metrics can then be studied under different models for the performative response and for the counterfactual dynamics of the distribution in the absence of performativity, which can be chosen depending on the learning task and application under consideration. Regarding the **bias metric**, we provide two extensions suitable for cases where the distribution is divided into several groups, which is relevant, e.g., in fairness-sensitive applications. * First, in a setting with several groups and multi-labeled data, one could calculate **a matrix of biases with one entry for each group and label**, defined as follows: $\mathtt{bias}^{g,y} = E(E_{(X, Y, G) \sim D^{test}_t}(q^y_t(X) - [Y=y] | G=g))$, where $q_t(X) = (q^1_t(X), \dots, q^{|\mathcal{Y}|}_t(X))$ is the vector of the model's softmax probabilities at time $t$ and $G$ is the group. These biases could be interpreted as a measure of unfairness among groups. 
* Second, one can use established metrics from the literature on **bias amplification** (see [1,2,3,4]). Finally, regarding the **error-uncertainty** trade-off, consider an arbitrary supervised learning task, where the goal is to find predictors $h_t \colon \mathcal{X}\to\mathcal{Y}$ from a set of possible models $\mathcal{H}$ (e.g. linear models or neural networks of a certain architecture). Suppose that we deploy a model $h_{t-1}$, which induces a distribution $D_t$. Then, the performative loss of $h_{t-1}$ in the slow deployment is $E_{(X, Y) \sim D_t}(l(h_{t-1}(X), Y)) = (E_{(X, Y) \sim D_t}(l(h_{t-1}(X), Y)) - \min_{h' \in \mathcal{H}} E_{(X, Y) \sim D_t}(l(h'(X), Y))) + \min_{h' \in \mathcal{H}} E_{(X, Y) \sim D_t}(l(h'(X), Y))$. The first term measures how far from optimal the model is on its own distribution. The second term corresponds to the simplicity of the induced distribution, as measured by how well the model class can fit it. We thus observe an error-simplicity trade-off, akin to the trade-off observed in Lemma 3.2. Moreover, as discussed in Section 3, the decomposition in Lemma 3.2 holds for a large class of distributions [5]. We will include the discussion of these extensions in the next version of the manuscript. **References** [1] Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457, 2017. [2] Angelina Wang and Olga Russakovsky. Directional bias amplification. ICML, 2021. [3] Dora Zhao, Jerone Andrews, and Alice Xiang. Men also do laundry: Multi-attribute bias amplification. ICML, 2023. [4] Bhanu Tokas, Rahul Nair, and Hannah Kerner. Making bias amplification in balanced datasets directional and interpretable. arXiv preprint arXiv:2412.11060, 2024. [5] Neha Gupta, Jamie Smith, Ben Adlam, and Zelda E. Mariet. Ensembles of classifiers: a bias-variance perspective. TMLR, 2022.
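To make the proposed metrics concrete, here is a small pure-Python sketch of the divergence-based shift metric and the per-group, per-label bias matrix defined in this reply. The data generation is entirely synthetic and invented for illustration (the metrics themselves only need distributions, softmax outputs, labels, and group memberships):

```python
import math
import random

def kl_shift(p_t, p0_t):
    """Divergence-based shift metric: KL(D_t || D_t^0) for discrete distributions."""
    return sum(p * math.log(p / q) for p, q in zip(p_t, p0_t) if p > 0)

def bias_matrix(q, y, g, n_labels, n_groups):
    """bias^{g,y} = E[q^y(X) - 1{Y=y} | G=g], with q the model's softmax outputs."""
    sums = [[0.0] * n_labels for _ in range(n_groups)]
    counts = [0] * n_groups
    for qi, yi, gi in zip(q, y, g):
        counts[gi] += 1
        for lab in range(n_labels):
            sums[gi][lab] += qi[lab] - (1.0 if yi == lab else 0.0)
    return [[s / counts[grp] for s in row] for grp, row in enumerate(sums)]

# Shift of a Bernoulli distribution whose mean moved from 0.5 to 0.7.
shift = kl_shift([0.7, 0.3], [0.5, 0.5])

# Synthetic softmax outputs, labels, and group memberships.
random.seed(0)
def softmax_sample(k=3):
    xs = [random.random() for _ in range(k)]
    return [x / sum(xs) for x in xs]

q = [softmax_sample() for _ in range(100)]
y = [random.randrange(3) for _ in range(100)]
g = [random.randrange(2) for _ in range(100)]
B = bias_matrix(q, y, g, n_labels=3, n_groups=2)
```

Each row of `B` sums to zero, since both the softmax probabilities and the one-hot labels sum to one; the individual entries expose per-group, per-label miscalibration of the kind the unfairness interpretation above refers to.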
Summary: The paper studies settings where predictions actively shape the data distribution, a formalism known as performative prediction. Beyond predictive accuracy, their main focus is on understanding how different update procedures and learning algorithms influence the data distribution, and on understanding to what extent performatively optimal predictors bias the distributions towards extreme values. They study these questions in an outcome-performative (predictions only influence outcomes, not features) binary prediction setting. They further impose a linear model of how predictions shape the conditional distribution over outcomes. Their main results exactly characterize essentially all aspects of this binary problem: what the performatively optimal solution is, what the optimal loss is, what the bias between predictions and outcomes is in the performatively optimal case, and what predictions look like if one performs naive updates. They do this for both stateless and stateful versions of their model. Claims And Evidence: Yes, all the claims in the submission are supported by formal results. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check the proofs in detail. Experimental Designs Or Analyses: The experimental designs are just visualizations of their theoretical results. These are valid and sound. Supplementary Material: I did not. Relation To Broader Scientific Literature: The paper contributes to the emerging area of performative prediction. Relative to other papers in this area, they focus on a very specific setup and aim to understand the long-term impacts of predictions on the distribution instead of designing algorithms to exactly solve for optimal solutions. Some of their main conceptual takeaways related to their finding that the performatively optimal solution is biased were known, at least in limited regimes. See for instance the biased coin toss example in the initial paper in the area (Perdomo et al. 2020). 
However, they illustrate these ideas in a much richer setting. Essential References Not Discussed: I think that the paper would benefit from including a discussion of the following papers. Kim & Perdomo (ITCS 2023). This paper studies a very similar setup with an outcome-performative binary outcome and designs learning algorithms that can both forecast outcomes accurately and steer them towards particular targets (e.g. optimize the likelihood that y=1). Given the similarities in setup and the focus on the broader impacts of prediction on the data beyond just accuracy, I think the paper would be improved by including it in the related work. Similarly, Miller et al. (ICML 2021) was one of the first papers to think about algorithms for performative optimality. Outside of performative prediction, their dynamics model is nearly identical to the dynamics considered in Golowich et al. (NeurIPS 2024) in their paper on online control of population dynamics. The focus of all of these papers is different from that considered here, but very much complementary. The reader would benefit from knowing how this work relates to the broader literature. Other Strengths And Weaknesses: The strength of the paper is that it provides a lot of insight and a comprehensive analysis of the problem they study. The main weakness is that their setup is perhaps overly simplistic. They think about a scalar optimization problem over binary outcomes where the dynamics are highly constrained. It is not clear how these insights apply to other settings. Other Comments Or Suggestions: The estimators paragraph at the bottom of pg 5 has a typo “extend” ---> “extent”. Also it references Theorem 4.1 when it should be Proposition 4.1? Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive review! We address your concerns and questions below and remain available for further discussion. **I think that the paper would benefit by including a discussion to the following papers.** Thank you for the suggested references! We agree that, while orthogonal to our work, they are certainly relevant. We will cite and discuss them in the next version of our manuscript. Indeed, Kim and Perdomo (2023) investigate how to steer distributions towards a more desirable outcome by using omnipredictors. While steering is indeed different from accuracy maximization, the focus of their work differs from ours. We analyze the secondary effects of performative risk minimization on the distribution and outcomes, which were not intended by the model provider. In contrast, Kim and Perdomo (2023) assume that the model provider intentionally changes the distribution according to their metric, which can be expressed in the omniprediction framework. Hence, the two works can be seen as complementary. Our work allows the model provider to discover and analyze the unintended distribution changes, while omniprediction could potentially allow the model provider to encode penalties for these changes, if they can be expressed in the omniprediction framework. Similarly, Golowich et al. (2024) focus on distribution steering. However, their results do not inform the model provider about the unintended distribution changes due to performative risk minimization. As for Miller et al. (2021), we will make sure to add this paper to the existing discussion of learning algorithms in the performative setting. **The main weakness is that their setup is perhaps overly simplistic.** Our work indeed analyzes the specific learning problem of binary mean estimation (as clearly acknowledged in the title). 
Even on this classic learning problem, our work points towards potential shortcomings/side-effects of performative risk minimization. Therefore, while we do not make specific claims for other settings, we hope that our work will encourage further exploration of the impact of performative methods. While the technical results are specific to the considered setup, we hope that our analysis can be a useful starting point for future exploration of other learning tasks, for several reasons. First, while we only analyze the linear response model, we note that the linear response naturally arises as a first-order Taylor approximation of any performative response. Thus, we believe that our results will (at least qualitatively) transfer to **situations of weak performative response** (i.e., small $|\alpha|$). Second, as we noted in the discussion of Lemma 3.2, **the error-uncertainty decomposition holds for a broad class of distributions**, not only the Bernoulli distribution. Thus, we can expect PRM to prefer distributions with less aleatoric uncertainty, which are usually more degenerate. For example, in the case of the multinomial distribution, the model provider would try to concentrate the probability mass on a small subset of outcomes. Third, our results can easily be extended to the following **more general group setting**. Imagine that clients consist of several independent groups, and each group reacts to the predictions of the model in the same way as the whole distribution in our paper. Also, assume that the model provider additionally gets some covariates that determine group membership before making a prediction. This modification makes our problem much closer to the usual supervised learning tasks, where the model provider needs to simultaneously learn a model for membership prediction and the outcomes for each group. 
At the same time, our results in the perfect information setting can be directly applied to this setup by independently applying the previous analysis to each group. The main limitation of such an extension is the assumption that groups evolve independently. This assumption could hold in the setting of drug efficacy prediction, but it will probably not hold in traffic prediction. **Typos** Thank you! We will fix these typos in the next version of our manuscript.
Summary: This paper studied the long-term impact of performative prediction on predictor quality (as measured by bias) and the population distribution (as measured by shift). It examined separately the settings of slow and rapid classifier updates, with or without perfect information. The paper analyzed the long-term bias and shift in the context of binary mean estimation and compared and contrasted them with the naive setting. The generalizability is suggested via RL simulations. ## update after rebuttal I thank the authors for providing detailed responses to my questions. I think the strengths of this paper generally outweigh its weaknesses, so I would consider raising my recommendation. Claims And Evidence: All claims are supported by either theoretical or experimental evidence. Methods And Evaluation Criteria: The method makes sense and the theoretical results are instructive. Evaluation metrics: bias and shift make sense, and the benchmark being the naive path (where performativity is absent) is a reasonable baseline for studying the impact of performativity. Theoretical Claims: I checked the correctness of the proofs in Section 4 and Theorem 5.1, where no issues were spotted. Experimental Designs Or Analyses: RL simulations validate the theoretical findings under information restriction. The simulated curves converge to the theoretical prediction. Supplementary Material: I went through the RL simulation details, and the algorithms are correct and reasonable. Questions: 1. Setting $\lambda=0$ seems too restrictive, no? The population's self-evolution, and how strongly its self-adaptation influences the effect of performativity, also seem very interesting. 2. Can you provide more details on how to obtain the optimal $\theta_t$ from the Bellman equation in the Greedy Exploration algorithm (line 1295 in Algorithm 2, page 24), since $V$ in that component contains an infinite sum? 
Relation To Broader Scientific Literature: The work builds on performative prediction and long-term fairness, the combination of which is an emerging area of study in the literature. It contributes a quantifiable framework for PRM’s societal impact. The limitation is that the focus on binary variables restricts connections to the broader performativity literature. Essential References Not Discussed: None recognized. Other Strengths And Weaknesses: This paper focuses on a novel problem and is one of the few works looking at the long-term societal impact of performative prediction. Although the methodology may be too simplified to capture practical situations, the findings on classifier bias and distribution shift in the long run are insightful. Other Comments Or Suggestions: 1. Right column on page 5: the second-to-last line contains "Theorem 4.1"; does it mean "Proposition 4.1"? 2. Last sentence of the caption of Figure 5: it seems both top and bottom rows correspond to the slow case. Questions For Authors: To complement the findings, would the case where the classifier is myopic, i.e., optimizes instantaneous objectives at every step instead of solving for a long-term policy, differ from the model considered in the paper? If so, in what way is the current model more advantageous; and if not, why would you or society prefer one over the other? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive review! We address your concerns and questions below and remain available for further discussion. **Setting $\lambda = 0$ seems too restrictive, no? ...** Thank you for your comment. Indeed, in Section 4.2.2 we had to make assumptions in order to ensure that we have one optimal value of the mean in the second period, $p_1$. First, we need to assume that the mean in the first period is also fixed, $p_0 = const$. Secondly, to make our setting estimable, we need some restrictions on either $\pi$ or $\lambda$, such as $\pi = 0$, $\pi=p_0$, or $\lambda = 0$ (otherwise, $p_0$ would not be distinguishable from $\pi$). Given that the purpose of the experiments was to see how exploration interferes with performative risk minimization, we chose $\lambda = 0$, since this case is harder in terms of exploration than $\pi=0$ or $\pi=p_0$. Notice, however, that one can observe the effects of self-evolution in our experiments in Section 5.2. There we use $\lambda \neq 0$. When $T \to \infty$, the mean evolves over a long period of time and, for small values of $|\alpha|$, converges to an equilibrium value. Some trends we observe as $T \to \infty$ are as follows: - The equilibrium mean, $p^*$, moves closer to the extreme values as $\alpha$ increases, when $\alpha > 0$. - In the slow deployment case with $\alpha < 0$, $p^*$ moves closer to zero as $|\alpha|$ increases. - In the rapid deployment case with $\alpha < 0$, for small values of $|\alpha|$ the mean also moves closer to zero. However, for large values of $|\alpha|$ it starts to oscillate. **Can you provide more details on how to obtain the optimal $\theta_t$ ...** Thank you for raising this point; we will clarify it in the next version of the manuscript. We find the value function, $V$, using the value iteration algorithm. 
By Blackwell's contraction mapping theorem and the boundedness of the loss function, this process converges. Then, we solve the optimization problem for $\theta_t$ directly. **To complement the findings, would the case where the classifier is myopic ...** The naive path serves exactly this purpose. The naive path corresponds to the situation where the classifier is myopic and performativity is present, while the bias and mean shift compare the PRM-induced distribution path under performativity with the distribution path in the absence of performativity. **If so, in what way the current model is more advantageous ...** Our results generally indicate that the naive (myopic) path induces less bias and shift in the positive feedback case. Thus, if one is concerned about the impact of performativity (see Section 6) and can sacrifice some accuracy, the naive (myopic) classifier could be preferable to the PRM one. The negative feedback case is more complicated. When the performativity is weak ($|\alpha|$ is small), the naive path introduces a bigger mean shift. However, when the performativity is strong, in the infinite horizon case, the naive path is more stable (e.g., it will not exhibit behavior such as that depicted in Figure 5, bottom row, fourth plot). **Typos** Thank you! We will fix these typos in the next version of our manuscript.
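For readers unfamiliar with the value-iteration step referenced in this reply, a generic sketch looks as follows. This is only an illustrative toy example, not the authors' Algorithm 2: the two-state MDP, rewards, and transitions below are invented, and the maximization over a discrete action set stands in for the paper's optimization over $\theta_t$. Convergence follows from the Bellman operator being a $\gamma$-contraction under a bounded loss, which is the Blackwell-style argument the reply invokes.

```python
def value_iteration(n_states, actions, reward, transition, gamma=0.9, tol=1e-9):
    """Iterate the Bellman optimality operator until the sup-norm update is < tol.
    The operator is a gamma-contraction, so this converges geometrically."""
    V = [0.0] * n_states
    while True:
        V_new = [
            max(
                reward(s, a) + gamma * sum(p * V[s2] for s2, p in transition(s, a))
                for a in actions
            )
            for s in range(n_states)
        ]
        if max(abs(u - v) for u, v in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Toy deterministic 2-state chain: R[(state, action)] and P[(state, action)]
# give the reward and the list of (next_state, probability) pairs.
R = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.5, (1, 1): 0.0}
P = {(0, 0): [(0, 1.0)], (0, 1): [(1, 1.0)], (1, 0): [(0, 1.0)], (1, 1): [(1, 1.0)]}
V = value_iteration(2, [0, 1], lambda s, a: R[(s, a)], lambda s, a: P[(s, a)])
```

Once the value function has converged, the greedy action at each state is recovered by a direct one-step optimization against `V`, which mirrors how the reply says $\theta_t$ is then obtained.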
Principal-Agent Bandit Games with Self-Interested and Exploratory Learning Agents
Accept (poster)
Summary: The paper studies a principal-agent bandit game where the principal first provides an incentive, and the agent then selects an arm based on its estimates and the provided incentive. The authors propose novel elimination algorithms for the i.i.d. setting and the linear bandit setting, with corresponding regret upper bounds of $\sqrt{T}$ and $T^{2/3}$. The algorithms' regret improves on that of Dogan et al. (2023). Claims And Evidence: The paper proposes elimination algorithms with detailed analysis of the regret upper bounds. From my level of understanding, I do not find big issues in the analysis. Methods And Evaluation Criteria: Table 1 provides a detailed regret comparison for different algorithms along with the underlying assumptions, which makes a convincing case that the proposed algorithms outperform previous ones. Theoretical Claims: I have not checked the correctness of all proofs. Experimental Designs Or Analyses: The paper does not have experiments. Supplementary Material: I checked Appendix B for fair comparisons of regrets. Relation To Broader Scientific Literature: The principal-agent bandit game is well motivated in the introduction with a good example. Previous papers on this topic usually assume that agents have access to the true means, while this paper, along with Dogan et al. (2023), bridges the gap by removing that assumption. Essential References Not Discussed: I'm unfamiliar with principal-agent bandit games, so I'm unsure whether essential references are missing. Other Strengths And Weaknesses: Strengths: The proposed elimination framework with searching is novel and strikes a good tradeoff between exploration and exploitation in this problem. Compared with Dogan et al. (2023), the algorithm has a better regret upper bound. Weakness: I understand that the paper is of a theoretical nature, but I believe that the paper could benefit from experiments on simulated data and real-world data. Other Comments Or Suggestions: I do not have other comments. 
Questions For Authors: I'm confused by the annotation in Table 1, "worst-case bound......enjoys a gap-dependent bound". What are the gap-dependent bounds? Are they logarithmic in $T$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below we address your concerns. --- **W:** I understand that the paper is of a theoretical nature, but I believe that the paper could benefit from experiments on simulated data and real-world data. **A:** Thank you for the suggestion. We do agree that experiments could provide additional value. Please see our response to Reviewer RBF5 (W3 and A3). --- **Q:** I'm confused by the annotation in Table 1, "worst-case bound......enjoys a gap-dependent bound". What are the gap-dependent bounds? Are they logarithmic in $T$? **A:** The gap-dependent bound we refer to is of the form $O ( \sum_{a \in [K]: \Delta_a>0}\frac{\log T}{\Delta_a})$ where $\Delta_a := \max_{b \in [K]} \\{\theta_b + \mu_b\\} - (\theta_a + \mu_a)$ measures the suboptimality gap of arm $a$ in the oracle-agent setup. As noted in the footnote of Table 1, Scheid et al. (2024b) obtain a similar gap-dependent regret bound, but only under the oracle-agent model. In contrast, for the more general learning-agent setting, whether such a bound can be achieved remains an open question (see the discussion below Theorem 3.4). Intuitively, we are pessimistic about this possibility, as it appears necessary for the algorithm to continue sampling suboptimal arms in order to stabilize the agent’s internal estimates and reduce the estimation error of the optimal incentive (see Section 3.3 for a more detailed discussion).
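As a quick illustration of the shape of this gap-dependent bound, the gaps $\Delta_a$ and the quantity $\sum_{a: \Delta_a > 0} \log(T)/\Delta_a$ can be computed directly (the $\theta$ and $\mu$ values below are made up purely for illustration):

```python
import math

def gap_dependent_bound(theta, mu, T):
    """Sum of log(T)/Delta_a over suboptimal arms, where
    Delta_a = max_b (theta_b + mu_b) - (theta_a + mu_a)."""
    best = max(t + m for t, m in zip(theta, mu))
    gaps = [best - (t + m) for t, m in zip(theta, mu)]
    return sum(math.log(T) / d for d in gaps if d > 0)

theta, mu = [0.2, 0.5, 0.1], [0.4, 0.3, 0.3]   # arm 1 is optimal (value 0.8)
# gaps are [0.2, 0.0, 0.4]; only the two suboptimal arms contribute
bound = gap_dependent_bound(theta, mu, T=10_000)
```

The quantity grows only logarithmically in $T$ but blows up as the gaps shrink, which is why gap-free worst-case bounds of order $\sqrt{T}$ are reported alongside it.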
Summary: This paper studies the bandit principal-agent problem, where a principal tries to incentivize an agent playing a bandit instance so as to maximize the principal's own cumulative reward. It extends the previous works of Scheid (2023) and Dogan (2023) by considering an agent who selects the arm based on *empirical* reward means. It covers both the i.i.d. reward case and the linear reward case, with agents either self-interested (that is, greedy) or exploratory. ## update after rebuttal I thank the authors for having clarified the validity of Definition 2.1. I think the revised version of the article should feature the general version of Definition 2.1 with the constants (a,b) (as explained by the authors in their reply) rather than just the special case (a,b) = (1/2, 1/2), as well as the discussion about the algorithms satisfying Def. 2.1. I keep my score unchanged because I still think Def. 2.1 is slightly restrictive, but I'm convinced by the overall quality of the work. Claims And Evidence: The claims in the paper are appropriate, clear, and supported by strong theoretical guarantees. The rates in the regret bounds are consistent with the existing literature, while derived under a less stringent assumption on the agent's behavior. Methods And Evaluation Criteria: There is no experiment supporting the results. This is not a problem for me, as the theoretical results seem strong enough. Theoretical Claims: I went through the proofs in the appendix. Experimental Designs Or Analyses: NA Supplementary Material: I read Appendices A, B, C, and D. I did not check the proofs. Relation To Broader Scientific Literature: I find the discussion of the existing scientific literature very satisfactory. While the number of references is relatively low (I counted 32), this is understandable given that principal-agent bandit games are a fairly new topic of research. 
The authors made a clear effort to compare their results and assumptions with the two closest papers, namely Scheid (2023) and Dogan (2023). I particularly appreciated Table 1, which gives a clear overview of these differences. Essential References Not Discussed: I cannot think of any essential reference not discussed by the authors. Other Strengths And Weaknesses: STRENGTHS: - The results are interesting. The authors manage to match the existing regret bounds (or with a slightly degraded rate, such as in the contextual case) while working under a less restrictive assumption on the agent. - The paper is precise and well written. I particularly appreciated the effort to intuitively explain the steps of the algorithms, as well as the additional discussions in the appendix. WEAKNESSES: - While considering an agent that compares empirical means rather than theoretical means is an improvement over Dogan (2023) and Scheid (2023), it seems that this paper employs many techniques from these papers without bringing much novelty from a mathematical point of view (except for the use of online elimination and MSP). - The pseudocode of Algorithm 1 is a bit hard to parse. Other Comments Or Suggestions: - The definition of pi^0 should be introduced earlier in the paper, as it appears in the pseudocode of Algorithm 1 without having been defined beforehand. - The pseudocode of Algorithm 1 could be clearer. For instance, in each phase m, it is not clear that the first Z_m rounds are dedicated to stabilizing estimators and the next T_m to determining optimal incentives: there could be ambiguity about whether these rounds overlap. Questions For Authors: - Is the assumption in Definition 2.1 satisfied by classic bandit routines, such as UCB or Thompson sampling? - It is not clear to me why considering a vector of incentives rather than just a scalar incentive on one arm at each round (as in Scheid 2023) is better. 
What are the advantages of this approach, given that the agent only pulls one arm at a time? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below we address your concerns. --- **W1:** It seems that this paper employs many technics from these papers (Dogan et al., 2023a) and (Scheid et al., 2024b) without bringing a lot novelties from a mathematical point of view (except using online elimination and MSP). **A1:** Here, we clarify that our techniques and analysis are **significantly different** from those in Scheid et al. (2024b), and **entirely distinct** from those in Dogan et al. (2023a). - **Comparison with Scheid et al. (2024b).** While Scheid et al. (2024b) use a standard binary search in an oracle-agent setting, we propose a noise-robust variant (Algorithm 3) specifically designed for the more challenging learning-agent setting. This is a non-trivial extension, as optimal incentives in our setting are time-varying—they depend on the agent’s evolving internal estimates. The search procedure must therefore be robust to fluctuations introduced by the agent’s learning process. Algorithm 3 is carefully designed to address these issues and is supported by a detailed analysis (see Lemma 3.1). We also introduce a novel elimination framework tailored to learning agents. Unlike traditional methods (e.g., Even-Dar et al., 2006), our algorithm does not permanently eliminate suboptimal arms but continues to play them moderately to stabilize the agent’s estimates. This is essential to avoid linear regret, as discussed in Section 3.3. In contrast, Scheid et al. (2024b) do not face these challenges, as agent responses in the oracle model are fixed for a given incentive. - **Comparison with Dogan et al. (2023a).** Our algorithms (Algorithms 5 and 6 in Appendices F and G) are fundamentally different from those of Dogan et al. (2023a). Their approach relies on an $\epsilon$-greedy strategy and explicitly estimates the agent’s model by solving an optimization problem at every round. 
In contrast, our approach is built around a **newly developed, novel** online elimination framework that **does not require** estimating the agent’s model. As noted in our response to Reviewer RBF5 (see W3 and A3), such estimation would incur significant computational and memory costs. A key innovation is our probabilistic amplification technique (see Section 4.2, also Algorithms 5 and 6), which ensures robustness to the randomness induced by the agent’s exploratory behavior. This technique supports two central components of our algorithm: **incentive testing** and **trustworthy online elimination**. Specifically, incentive testing refers to the process of identifying reliable incentives in the presence of agent learning, while trustworthy elimination ensures that, with high probability, the elimination process is not disrupted by exploratory actions of the agent. We believe that the techniques developed around probabilistic amplification offer a robust and flexible foundation for principal-agent interactions with learning agents, and may inspire further exploration in related dynamic settings. --- **W2:** Presentation of Algorithm 1 and introducing $\pi^0$ earlier. **A2:** Thanks for your suggestions. We will make sure to polish up our presentation according to your suggestions in the next version. --- **Q1:** Is assumption in definition 2.1 satisfied by classic bandit routines, such as UCB or Thompson sampling? **A:** Thank you for the insightful question. UCB does not satisfy Definition 2.1, as it continues to explore suboptimal arms indefinitely rather than decaying exploration over time. Whether Thompson Sampling satisfies the definition is unclear and may warrant further study. Since our focus is on designing principal algorithms for agents satisfying Definition 2.1, characterizing which other bandit algorithms meet this condition is beyond the scope of this work. 
--- **Q2:** Unclear why considering a vector of incentives rather than just a scalar incentive on one arm at each round (just as in Scheid 2024b) is better. What are the advantages of this approach, given that the agent only pulls one arm at a time? **A:** Incentivizing multiple arms is critical for our online elimination process as detailed in Section 3.4 (and its robust version for the exploratory agent). Specifically, when conducting elimination at round $t$, the principal needs to compare $\hat{\theta}_a(t)+\hat{\mu}_a(t)+\epsilon$ against the empirical maximum $\max_b \hat{\theta}_b(t)+\hat{\mu}_b(t)$ over all active arms $a$, where $\epsilon>0$ is an error specified by the algorithm, $\hat{\theta}_a(t)$ is the principal's estimate and $\hat{\mu}_a(t)$ is the agent's estimate. Since the agent's estimates are unknown to the principal, the principal is unaware of which active arm achieves the empirical maximum. To address this, the algorithm incentivizes every active arm $a$ by $\hat{\theta}_a(t)$ at round $t$. By doing this, the algorithm indirectly compares $\hat{\theta}_a(t)+\hat{\mu}_a(t)+\epsilon$ against the empirical maximum by observing the played arm. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response and appreciate the clarifications regarding the technical novelties. However, I find it quite concerning that the authors are unable to present a classic bandit algorithm that satisfies Definition 2.1. This represents a significant limitation of their study from a practical perspective. Given the overall quality of the work, I am maintaining my recommendation; however, this issue should be explicitly addressed, or at least discussed, in the revised version. --- Reply to Comment 1.1.1: Comment: **Q:** However, I find it quite concerning that the authors are unable to present a classic bandit algorithm that satisfies Definition 2.1. This represents a significant limitation of their study from a practical perspective. 
**A:** Thank you for the thoughtful comment. We would like to clarify two key points in the following response. **$\epsilon$-greedy type algorithms satisfy definition 2.1.** In the following, we show that $\epsilon$-greedy type algorithms satisfy definition 2.1. For clarity, let's restate def 2.1 as follows. **Definition 2.1.** Let the probability that the agent explores at round t be $p_t=\mathbb{P}(A_t\not\in \arg\max_{a\in \mathcal{A}}\\{\hat{\mu}_a(t)+\pi_a(t)\\})$. There exists an absolute constant $c_0 \geq 0$ such that $p_t \leq c_0 \sqrt{\log(2t)/t}$ at any time step $t \in [\tau,T]$, where $\tau \geq 2$ is the minimum integer satisfying $c_0 \sqrt{\log(2\tau)}<\sqrt{\tau}$. We point out that $\epsilon$-greedy type algorithms with decaying exploration rates $\epsilon_t=O(t^{-1/2})$ (e.g., Algorithm 2 of Dogan et al. (2023a) uses $\epsilon_t =O(t^{-1/2})$ for the learning agent) satisfy this condition directly. While some variants (e.g., Algorithm 1.2 of (Sliv19) chooses $\epsilon_t =O( t^{-1/3})$) do not meet this exactly, they fall under a mild generalization of def 2.1 obtained by assuming $p_t \leq c_0 \log^b(2t) t^{-a}$, where $a,b \in (0,1)$ are two absolute constants. This results in an additional regret in the order of $O( \log^b(T) T^{1-a})$. **Practical Perspective from Human Behavior.** Definition 2.1 generalizes agent behavior models studied in prior principal-agent bandit work (e.g., Dogan et al., 2023a; Scheid et al., 2024b) by allowing agents to both learn their preferences and occasionally explore. In behavioral and cognitive science (e.g., [1–6]), the $\epsilon$-greedy model has been widely used to capture how humans make sequential decisions under uncertainty. While $\epsilon$-greedy is not always the most accurate model of human behavior, its simplicity and interpretability make it a valuable reference point. This further motivates our use of Definition 2.1 as a meaningful step toward modeling realistic learning agents. 
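As a quick numerical sanity check of the claim above (our own sketch, using the illustrative choice $c_0 = 1$ and $\epsilon_t = t^{-1/2}$, not constants taken from the paper), the $\epsilon$-greedy exploration rate indeed stays below the Definition 2.1 envelope for all $t \geq 2$:

```python
import math

c0 = 1.0  # illustrative constant, not the paper's

def p_t(t):
    # exploration probability of epsilon-greedy with eps_t = t^{-1/2}
    return t ** -0.5

def envelope(t):
    # Definition 2.1 bound: c0 * sqrt(log(2t) / t)
    return c0 * math.sqrt(math.log(2 * t) / t)

satisfied = all(p_t(t) <= envelope(t) for t in range(2, 100_000))
print(satisfied)  # True, since log(2t) >= log(4) > 1 for every t >= 2
```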
We will include this clarification and broader motivation in the revision. References: [Sliv19] Slivkins, Introduction to Multi-Armed Bandits, 2019. [1] Barron, G., Erev, I. Small feedback-based decisions and their limited correspondence to description-based decisions, Journal of behavioral decision making, 2003. [2] Kalidindi, K., Bowman, H. Using $\epsilon$-greedy reinforcement learning methods to further understand ventromedial prefrontal patients’ deficits on the Iowa Gambling Task, Neural Networks, 2007 [3] Gershman, S., Deconstructing the human algorithms for exploration, Cognition, 2018 [4] Lee, Michael D., et al. "Psychological models of human and optimal performance in bandit problems." Cognitive systems research 12.2 (2011): 164-174. [5] Daw, Nathaniel D., et al. "Cortical substrates for exploratory decisions in humans." Nature 441.7095 (2006): 876-879. [6] Cohen, Jonathan D., Samuel M. McClure, and Angela J. Yu. "Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration." Philosophical Transactions of the Royal Society B: Biological Sciences 362.1481 (2007): 933-942.
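Returning to the Q2 answer earlier in this thread: the mechanism of incentivizing every active arm by the principal's own estimate, so that the agent's chosen arm reveals which active arm attains the empirical maximum, can be illustrated with a toy sketch (hypothetical values and a purely greedy agent are assumed; this is not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5
mu_hat = rng.uniform(size=K)     # agent's private empirical means (hidden from principal)
theta_hat = rng.uniform(size=K)  # principal's own reward estimates
active = [0, 2, 3]               # arms still active in the elimination phase

# Principal incentivizes every active arm a by theta_hat[a]; inactive arms get 0.
pi = np.zeros(K)
pi[active] = theta_hat[active]

# A greedy agent plays argmax_a (mu_hat[a] + pi[a]).
played = int(np.argmax(mu_hat + pi))

# If the played arm is active, it attains max over active arms of
# theta_hat + mu_hat, even though mu_hat is never observed by the principal.
if played in active:
    best_active = active[int(np.argmax((theta_hat + mu_hat)[active]))]
    assert played == best_active
```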
Summary: This paper studies the problem of principal-agent interactions with self-interested agents. Different from previous studies like Dogan et al. (2023a, 2023b) and Scheid et al. (2024b), this paper assumes an empirical-mean-maximizer agent behavior model rather than a true-mean maximizer. The authors’ elimination framework and search algorithms are novel, efficiently handling uncertainty from greedy agent learning with iid rewards while achieving strong regret bounds. The authors also extend their algorithm to exploratory agents and linear rewards and show strong regret bounds under those settings as well. Claims And Evidence: Yes. Methods And Evaluation Criteria: n/a. Theoretical Claims: I did not check the correctness of proofs. Experimental Designs Or Analyses: n/a. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper is related to the literature of principal-agent bandit games. Essential References Not Discussed: I did not notice any essential reference not being discussed. Other Strengths And Weaknesses: On the positive side, this paper considers a learning approach to learning the near-optimal/no-regret principal policy in principal-agent bandit games. I found the problem presented by the paper interesting and relevant. In general, this paper proposed an innovative research topic. On the other hand, I think the outperformance of soft-O(T^{11/12}) is not a fair comparison, since their regret is defined with respect to different behavior models, i.e., true mean versus empirical mean. If we compare regret under different models, then it seems the proposed Algorithm 7 performs worse than C-IPA of Scheid et al. (2024b) under the linear reward model. What is more, besides the unfair comparison to prior work (e.g., Dogan et al., 2023a) due to the different agent behavior model, the "significantly improve" claim remains unsubstantiated in practice without empirical comparisons. 
This paper is missing comparable algorithms to validate its performance; I wonder whether some naive heuristic strategies might perform well in practice. Even simple synthetic experiments would strengthen its claims, and provide more insights into the algorithm's performance improvement. Other Comments Or Suggestions: n/a. Questions For Authors: n/a. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below we address your concerns. --- **W1:** I think the outperformance of soft $O(T^{11/12})$ is not a fair comparison, since their regret is defined with respect to different behavior models, i.e., true mean versus empirical mean. **A1:** We would like to clarify that the comparison with the $O(T^{11/12})$ regret bound (Dogan et al., 2023a), as well as others in Table 1, **is indeed fair** because they can all be evaluated under the same behavior model and regret definition. Specifically, our behavior model generalizes theirs: by letting the agent always receive the constant reward equal to the true mean (as noted in Remark 2.2), our model reduces to theirs. In this case, the empirical mean becomes the true mean, ensuring that the regret definitions and behavior models align. Thus, our comparisons are fair. --- **W2:** If we compare regret under different models, then it seems the proposed Algorithm 7 performs worse than C-IPA Scheid et al. (2024b) under the linear reward model. **A2:** As mentioned in A1, regret bounds can be fairly compared when the models and regret definitions align, particularly when one model strictly generalizes another. Our setting is strictly more general than that of Scheid et al. (2024b), as their oracle-agent model is a special case of our learning agent framework. While our regret bound is indeed worse when restricted to this special case, our algorithm is designed to work in the broader and more challenging learning agent setting which is not addressed in Scheid et al. (2024b). --- **W3:** The "significantly improve" claim remains unsubstantiated in practice without empirical comparisons. **A3:** We thank the reviewer for this suggestion. Our primary focus in this work is on theoretical improvements in the order of the regret bound (improving from $O(T^{11/12})$ to $O(\sqrt{T})$ in exploratory oracle-agent setup). 
As such, this version does not include experiments. We do agree that empirical evaluation is a valuable direction and plan to pursue it in future work. Nonetheless, we would like to highlight several practical advantages of our algorithm compared to Dogan et al. (2023a), particularly in terms of implementation: - **Computational efficiency.** Their algorithm solves a linear program at each round, with the number of constraints growing linearly in $t$, leading to increasing runtime. In contrast, our Algorithms 5 and 6 only require simple arithmetic operations per round, resulting in much greater efficiency. - **Memory usage.** Their method stores all past incentives and actions, resulting in memory usage that scales with $t$. Our algorithms avoid this by maintaining only a few counters or estimates, ensuring constant or sublinear memory usage. - **Hyperparameter tuning.** Their algorithm requires two hyperparameters that must be carefully tuned. Our algorithms, by contrast, only require a standard confidence parameter $\delta \in (0,1)$, which typically does not require fine-tuning in practice.
Geometric Algebra Planes: Convex Implicit Neural Volumes
Accept (poster)
Summary: The paper aims to improve grid-based representations for neural fields. Inspired by principles from geometric algebra, the paper introduces a set of formulations, specifically, convex and semi-convex representations, to enhance the expressiveness and efficiency of grid-based neural fields. Experimental results on several tasks, including 3D novel view synthesis, segmentation, and video segmentation, demonstrate that the proposed method can improve the efficiency of the representations. ## update after rebuttal No major concerns were raised, and most of the concerns were resolved during rebuttal. Thus, I would like to keep my initial score. Claims And Evidence: The paper claims that incorporating geometric algebra concepts into grid-based neural field representations can enhance expressibility. To support this, the authors provide experimental results on multiple tasks. Methods And Evaluation Criteria: Although the idea of combining lines, planes, and cubes for 3D representations is not new in this domain, the paper introduces the intriguing idea of connecting them to geometric algebra and convexity. However, the transition to the introduction of convexity seems abrupt. In addition, it is not entirely clear whether the choice among these formulations is task-dependent or if there is a general guideline for their application. Theoretical Claims: Theoretical claims seem valid. The analysis, based on the 2D setting, is extended to 3D tasks through experiments. Experimental Designs Or Analyses: The experimental designs appear to be standard. Supplementary Material: I reviewed supplementary materials, focusing on implementation details. Relation To Broader Scientific Literature: This work is closely related to prior research on efficient grid-based representations, such as TensoRF, K-Planes, and MERF. The proposed method could provide an interesting direction toward convexity and other untouched aspects in this domain. 
Essential References Not Discussed: The paper cites all essential references. Other Strengths And Weaknesses: No comments Other Comments Or Suggestions: In Section 3.1, the paper states, "The specific models we use for nonconvex, semiconvex, and convex optimization are illustrated in Figure 1", yet only the nonconvex model is shown. Questions For Authors: Q1: Is there a specific reason for including the overviews of the three architectures in the supplementary rather than in the main paper? Q2: In Table 2, how is the performance of Tri-Planes model with concatenation? Q3: In 2D representation, novel view synthesis, and novel view object segmentation, non-convex representations seem to perform best, while semi-convex seem to perform best for video segmentation tasks. Could you provide more detailed guidelines or discussion on how one should select among nonconvex, semi-convex, and convex representations for a given task? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful engagement with our work. Response to other comments and suggestions: Thank you for noticing the inconsistency with figure 1. This is a typo, the full figure is in the appendix (figure 6) due to space limitations in the main text. Response to questions: Q1 (full architecture figure in appendix): We did not have enough space. Q2 (performance of Tri-Planes with concatenation): The performance is very similar to original Tri-Planes (with feature addition). For instance, in the 3D segmentation with 2D supervision task (task 1), Tri-Planes with concatenation using the same channel dimensions and resolutions (leading to slightly more parameters as the input feature size to the MLP decoder is increased) achieves the following IOU results: 0.691 (convex), 0.870 (semiconvex), 0.868 (nonconvex). The results achieved by addition of the features were: 0.681 (convex), 0.868 (semiconvex), 0.863 (nonconvex). Since the difference is very minor, we present Tri-Planes in its original form. Q3 (guidelines for when to use convex, semi-convex, vs. nonconvex GA-Planes): The radiance field reconstruction task is inherently nonconvex due to the nonlinear forward model. Thus, using a semiconvex or convex model does not provide any theoretical or practical advantages. For the other 2 tasks, convexity and semi-convexity improve stability with respect to random initialization and training procedure. However, empirically we find that nonconvex models also work well once model size is large enough; convexity is most essential for small models that are most sensitive to random initialization. Convex models can also be trained very quickly with specialized convex solvers (a future direction for our project). Semiconvex models don’t have fast, dedicated solvers like convex models, but they also achieve the global minimum of the objective and are thus more stable compared to nonconvex models with similar performance.
Summary: The paper reviews existing literature on INRs, noticing that each method presents a trade-off between its representation power and its size and optimizability. Based on Clifford algebra, Geometric Algebra Planes are introduced, generalizing some of the existing approaches which use 2 or 3-dimensional feature grids. GA-Planes uses a combination of grids at different dimensions and resolutions, and uses different combination methods (e.g. multiplication and/or concatenation), resulting in convex, semi-convex and non-convex models. A theoretical derivation describes the behaviour and bounds of the proposed approach, also providing practically useful insights. Experiments with neural radiance fields and segmentation are shown to validate the method. ## Update after rebuttal The rebuttal addressed most of my concerns. I am still not convinced of the validity of the video segmentation experiment, but overall I lean towards acceptance. Claims And Evidence: The claims are mostly supported by evidence. Experimental evidence could be improved (see below). Methods And Evaluation Criteria: The proposed method appears appropriate for the described problem. Theoretical Claims: I am familiar with the described problem, but not an expert in the theory behind it, therefore I am unable to verify the proofs for the theoretical claims. Experimental Designs Or Analyses: The experiment on NeRF is valid, and shows a comparison with the most important baselines, showing good performance across multiple model sizes. E1) The 3D segmentation experiment also seems valid and shows that GA-Planes supersedes Tri-Planes. However, to fully convince the reader, other baselines should be included as well. Examples can be found in [1]. While achieving state of the art might not be the goal of the work, showing that GA-Planes can improve existing works seems to be beneficial. Can GA-Planes be used in conjunction with existing models used as (probably non-convex) decoders? 
E2) The video segmentation task is unclear: it shows that GA-planes is better than tri-planes, however I do not understand the significance of a segmentation task where the ground truth comes from an existing segmentation model. E3) Experiments on shape/image representation could also show the benefit of the method, and would have been an important addition to the paper to show the general validity of the method. [1] Deep Learning Based 3D Segmentation: A Survey, Yong He, Hongshan Yu, Xiaoyan Liu, Zhengeng Yang, Wei Sun, Saeed Anwar, Ajmal Mian Supplementary Material: Yes. A1 and A3-7 Relation To Broader Scientific Literature: The contributions seem sufficiently contextualised, both in their theoretical framework and among recent baselines, except on segmentation (see concerns above). Essential References Not Discussed: Potential additional works needed for the segmentation experiments (see above). A related works section on Clifford algebra-related methods should also be included [2] [3] [4], given the claim of being "the first use of GA in neural volume models" [2] Clifford Neural Layers for PDE Modeling, Johannes Brandstetter, Rianne van den Berg, Max Welling, Jayesh K. Gupta [3] Clifford Group Equivariant Neural Networks, Ruhe et al. [4] Geometric Clifford Algebra Networks, Ruhe et al. Other Strengths And Weaknesses: Strengths: S1) The paper is well written S2) The analyses are well made and offer good insights with also practical applications, as in section 4.2. Figure 3 is also very informative, including multiple methods at different sizes. S3) The method seems well grounded and generalises common approaches such as Tri-planes Weaknesses: W1) The experimental weaknesses raised above concern me about the practical usability of the method, so I would like the authors to address those concerns. W2) Does the method come with a speed and/or memory overhead? Given the claims of performance/memory optimality, this should be addressed. 
Other Comments Or Suggestions: Figure 1 is referenced in the text as containing 3 rows but it only has one. The text is referring to the supplementary, I imagine. Having a full (but smaller) figure in the main text would be beneficial. Questions For Authors: I would like the authors to address my concerns about experimental design and memory/speed overhead. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments and thoughtful engagement with our work. We address your concerns below. Response to E1 (other baselines and nonconvex decoders): Thank you for referencing [1]. In our 3D segmentation experiments, we use a different dataset compared to data types mentioned by the survey paper. The task we consider is to lift the 2D segmented images of a particular scene to get the 3D segmentation of the object, which is very similar to the normal radiance field reconstruction task, with the only difference being the lack of color prediction. We chose this task as it has a linear forward operation and can be formulated as a convex optimization problem with the use of a convex loss function (like mean-squared error). In the referenced paper, the segmentation is done on different representations of the 3D space. By focusing on optimizing a method for reconstructing 3D scenes from 2D images, we allow for the use of any highly performant, open source pre-trained segmentation model to be used to produce binary masks to be lifted to 3D by our method. Thus, any improvement of 2D segmentation models can be directly reflected in our 3D segmentation output. Regarding decoders, our GA-Planes representation can be used as a feature embedding with any desired decoder including existing INRs as decoders; indeed our nonconvex GA-Planes model uses a standard nonconvex MLP decoder. For generative modeling, a similar approach like EG3D paper (using Tri-Plane representation) could be used with GA-Planes as the underlying representation (this is an exciting direction for future work). Response to E2 (clarification on video segmentation): The video segmentation task we consider is the same as 3D segmentation with 2D+time (xyt) replacing 3D (xyz). We show performance on temporal superresolution in this task, which is interpolating missing frames, similar to rendering missing view angles. 
Again, here we are taking per-frame segmentation masks as training input and our goal is to essentially interpolate these 2D masks into a coherent 3D mask across the video (including missing test frames). Response to E3 (shape/image representation): Our 3D segmentation experiment is performing shape fitting (with either direct 3D supervision via space carving, or indirect 2D “tomographic” supervision); we will clarify our description of this task in the revision. We also include image fitting experiments to support our theoretical analysis (figure 7 in appendix of original submission). Response to GA references: Thank you for pointing us to these models using geometric algebra. We are happy to include them in our revised related works discussion. Response to weaknesses: W1 (clarification on existing experiments): Please refer to our responses above to E1, E2, and E3. W2 (speed and memory): No, our method does not come with a speed or memory overhead. We compare the number of parameters (memory) and performance metrics such as PSNR in figure 3. We also added a plot comparing memory and training times of GA-Planes and K-Planes with the configurations as in figure 3 (https://imgur.com/a/Kgm12Pk). Note that the variations in training time are directly influenced by the input feature dimension that gets decoded by the MLP. Comparing GA-Planes and K-Planes with the same feature dimensions, their training times are on par. As noticeable on the plot, GA-Planes had larger feature dimensions on some models, leading to longer training times (we selected the configurations to optimize memory vs. PSNR, which we list on appendix table 5). Response to suggestions: Thank you for noticing the typo about referring to figure 1. We will put the full figure in the main text. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My doubts are mostly cleared, except for the video segmentation. To my understanding, an existing segmentation model (SAM) is used as GT. 
Why is it significant to show that GA planes can mimic an existing model better than Tri-Planes? --- Reply to Comment 1.1.1: Comment: Thank you for your response. To clarify, the task we are testing is temporal superresolution of segmentation masks. SAM is producing a segmentation mask for each frame in the video, which can be treated as the data preparation stage. We are trying to render the segmented video from a subset of the segmentation masks, which involves interpolating masks for the frames in between given frames. It’s analogous to our 3D object segmentation task where we are also not trying to create a segmentation mask from raw images but instead trying to lift 2D segmentation masks to 3D segmentations. Interpolation of unseen frames (temporal superresolution) is analogous to rendering images for unseen view angles. The point of comparing GA-Planes and Tri-Planes is to demonstrate the necessity of the GA-Planes parameterization to enable convex and semiconvex training with good performance. Tri-Planes is the most similar previously proposed model to ours that is compatible with convex reformulation, and we show that the convex formulation for GA-Planes performs consistently better in both video and 3D segmentation tasks.
Summary: This paper provides an analysis of the mixture of n-dimensional (n≤3) voxel representations for learning neural fields. As the authors mentioned, this voxel representation can be line, plane, or volume, which can be viewed as a low-rank or low-resolution representation to encapsulate the target scenes or images. By providing 4 theorems, the authors analyze the expressiveness of each different model design, which is consistent with the experimental results. Moreover, in my understanding, this analysis and the results are also consistent with the related studies, such as K-Planes, TensoRF, etc. Claims And Evidence: Yes. The claims in the paper are precise and correct. The authors provide four theorems that progressively investigate the expressiveness of the models in terms of the matrix rank. The first theorem is the most basic model that simply consists of line grids with a linear decoder, which is close to matrix decomposition as the authors mentioned. The last theorem is the combination of the low-rank and low-resolution grids with non-linear MLP layers that are understood as the most expressive representations compared to the previous models explained in previous theorems. I fully understand these theorems and I think they are correct and well analyzed. Methods And Evaluation Criteria: Based on these four theorems, the authors provide a GA model that represents n-dimensional data (2D images or 3D scenes) with a low-rank and low-resolution grid representation with non-convex MLP decoders. Figure 2 and Figure 3 support the efficacy of this method and the evaluation metrics are properly chosen. Moreover, the authors conduct 2D/3D segmentation experiments in Section 5.2, and video segmentation in Section 5.3. By doing so, the authors provide supportive results in 2D/3D/2D+temporal domains. While there are no experiments handling n-dimensional data, it is fine with me. Theoretical Claims: Overall, the theorems are correct. 
In the first theorem, the authors present the basic model using two matrices U, V, which is the same as matrix decomposition. The second theorem is the replacement of matrix multiplication with matrix addition. This is to provide an analysis that matrix addition has a limitation of expressiveness due to the upper-bounded matrix rank. The third theorem is an extension of the first theorem using a two-layered MLP decoder. The last theorem is the generalized version of the third theorem, which utilizes the standard MLP layers. For each step, the authors provide theoretical analysis in terms of the model expressiveness by checking the matrix rank, and these results are summarized in Table 1 of the manuscript. Experimental Designs Or Analyses: The authors conduct (1) image reconstruction in Figure 2, (2) radiance field modeling in Figure 3, and 2D/3D/Video segmentation in Table 2. By doing so, the authors well prove their claim about the model expressiveness in different domains, which looks sound and reasonable to me. Overall, the authors provide qualitative and quantitative results, and the results well support the authors' claim. Especially, the results in Figure 3 well explain the necessity of the low-rank and low-resolution grid representation. The authors compare the proposed GA model with two popular low-rank grid-based methods, TensoRF and K-Planes, and the results from the proposed method outperform the previous methods. This is a strong clue to support the authors' claims. Supplementary Material: In Table 4 of the supplementary material, the authors provide technical comparisons with different grid-based methods. Here, I wonder what the meaning of the model size is, and why the voxel-based methods are annotated with an X mark. Also, can the authors elaborate on the meaning of optimizability in Table 4? The overall analysis in Table 4 is quite ambiguous and difficult to understand. 
Relation To Broader Scientific Literature: So far, recent studies on neural fields using low-rank and low-resolution grids have provided only empirical results, without theoretical analysis of their model designs. This paper provides profound theoretical reasons that align with those recent studies. I believe this paper describes its understanding of hybrid grid representations well, and this will be beneficial to researchers in this domain. Essential References Not Discussed: Overall, this paper addresses the generic issues of hybrid grid representations well. I cannot find anything the authors missed. In the supplementary material, the authors provide additional results and technical comparisons with various methods, such as the hash-grid-based method [InstantNGP]. So, I believe there is nothing more to discuss. Other Strengths And Weaknesses: This paper is well written. Providing four theorems is highly admirable, and I really enjoyed reading this manuscript. The theorems support the results in this paper as well as those of many recent studies, which claim that hybrid grid representations can be beneficial for their downstream tasks. I have a few concerns about this work. (1) Can the authors also provide a comparison with a volume-based method in Figure 3? Theoretically speaking, the full-volume grid representation should be the most expressive compared to other grid-based methods, including the proposed GA-Planes model. However, it is widely known that volume-only methods tend to fall into local minima with relatively low-fidelity results compared to MLP-only methods or MLP-voxel hybrid methods [MipNeRF, ZipNeRF]. For example, in [MonoSDF], MLP-only methods outperform volume-grid-based methods on the surface reconstruction task. (2) From this perspective, I believe the low-rank structure can become a rescue that avoids falling into such problems.
While expressiveness is an important factor, in my opinion it is not the only one that affects the final rendering quality. __I hope the authors can share their opinion on this issue in the rebuttal.__ (3) I wonder whether the expressiveness of MLP-only methods, such as Mip-NeRF, can be characterized as well. Can the authors provide their own analysis of such methods? Other Comments Or Suggestions: I described my questions in the section above; please refer to it. Questions For Authors: I described my questions in the section above; please refer to it. Ethical Review Concerns: There is no ethical issue in this paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review. Response to 1 (comparison with volume-based method): The pink dashed line marked as “GA-Planes ablation (volume only)” is a volume-based method. In the notation we use, this is D($e_{123}$). We believe the inferior performance is caused by the coarser resolution of the 3D grid compared to the higher resolution that 2D and 1D grids can afford with the same total parameter count. Increasing the resolution of the volume features is expensive in terms of memory. Based on our 2D theory, the use of an MLP decoder enables high effective rank (limited by the resolution) representations even through “low-rank” components such as lines. Thus, the volume-only model does not achieve the expressivity of a combination of high-resolution lines/planes at the same memory budget. Response to 2 (local minima vs model expressiveness for volume-based vs factorized and implicit models): We note that for NeRF training, the objective function itself is nonconvex, so regardless of the volume parameterization there is a risk of getting stuck in a local minimum. However, if we consider convex objectives such as shape/video/image fitting, then a 3D grid or a concatenation of tensor features (e.g. our convex GA-Planes model) should be able to optimize globally, whereas a tensor factorization involving feature products (e.g. our nonconvex GA-Planes model, and K-Planes) would still be nonconvex. As described above, we believe that the primary reason behind the performance improvement we observe compared to a 3D grid is based on the difference in resolution, which improves the representation capacity or effective rank of our models, rather than a difference in optimizability. Response to 3 (analysis of MLP-based models): With a purely MLP-based model, we cannot define an equivalent matrix factorization optimization problem like the ones we provide in our theoretical analysis of GA-Planes. 
However, some prior works [1] have analyzed the NeRF model with Fourier features through the lens of neural tangent kernels (NTKs). [1] https://arxiv.org/pdf/2006.10739 --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. Most of my concerns are clearly answered and I fully understand the authors' opinions. I have no further questions and I keep my original score, accept, as a final score. Thanks for the sincere and precise comments. Best,
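The resolution argument in Response 1 above (a volume grid affords a much coarser per-axis resolution than planes or lines at the same total parameter count) can be checked with a back-of-envelope sketch. The budget and feature dimension below are hypothetical numbers chosen for illustration, not values from the paper.

```python
def grid_resolution(param_budget, feature_dim, kind):
    """Max per-axis resolution R for a 3D feature grid under a parameter budget.

    kind: 'volume' (one R^3 grid), 'planes' (three R^2 grids),
          'lines' (three R^1 grids).
    """
    if kind == "volume":
        return int((param_budget / feature_dim) ** (1 / 3))
    if kind == "planes":
        return int((param_budget / (3 * feature_dim)) ** (1 / 2))
    if kind == "lines":
        return int(param_budget / (3 * feature_dim))
    raise ValueError(kind)

budget, f = 10_000_000, 16  # hypothetical: 10M parameters, 16 features per cell
for kind in ("volume", "planes", "lines"):
    print(kind, grid_resolution(budget, f, kind))
# volume ends up far coarser per axis than planes, which are coarser than lines
```

At a fixed memory budget, lines afford orders of magnitude finer resolution than a volume, which is the trade-off the rebuttal credits for the volume-only ablation's inferior performance.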
A Non-isotropic Time Series Diffusion Model with Moving Average Transitions
Accept (poster)
Summary: This paper proposes a non-isotropic time series diffusion model with moving average transitions (MA-TSD). First, the authors empirically find that, when DDPM is applied directly to time series data, the directions of model gradients at different diffusion steps conflict during training, which leads to unstable optimization. Second, they propose the moving average transition, which functions like a low-pass filter and keeps the informative low-frequency components. They also introduce the non-isotropic forward process and the corresponding loss function, as well as a DDIM-like acceleration strategy for inference. Extensive experiments demonstrate the strong performance of MA-TSD over existing DDPM-based diffusion models on time series tasks. Claims And Evidence: Yes, they are. 1. **Claim:** When DDPM is applied directly to time series data, the directions of model gradients at different diffusion steps conflict during training. **Evidence:** Supported by Figure 2. 2. **Claim:** The moving average schedule and the instance normalization are useful. **Evidence:** Tables 1, 2, 4. 3. **Claim:** The backward process of MA-TSD can naturally be viewed as time series super-resolution. **Evidence:** Table 3. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: This paper does not have any theoretical claims. Experimental Designs Or Analyses: - In Figure 4, the factor-only accelerating strategy is compared only against a randomly selected subset strategy. To better validate its effectiveness, the authors should also include a comparison with a uniform sampling strategy (e.g., the default DDIM approach, which uses uniformly spaced timesteps for acceleration). Other experimental designs and analyses are sound.
Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: The empirical finding that directly applying DDPM to time series data causes conflicting gradients might be helpful for the long video generation task. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - The empirical finding that direct application of standard diffusion to time series may cause gradient conflicts during training (due to the rapid decrease of low-frequency information in the diffusion process) is valuable, and is insightful for the video generation community. - The proposed moving average schedule and instance normalization demonstrate effectiveness. - The factor-only acceleration strategy is effective. - The paper provides thorough experimental validation and ablation studies. Weaknesses: - The paper should include an ablation study on the parameter L, and analyze how varying L impacts the severity of gradient conflicts when DDPM is applied directly to time series. - The paper does not specify the model architecture (e.g., is it a 1D U-Net?). Other Comments Or Suggestions: A suggestion: the authors should cite the relevant references in the tables (e.g., in Tables 1 and 2) to allow readers to easily trace the sources of the compared methods. Questions For Authors: Beyond time series forecasting, how applicable is this method to other scenarios, such as video generation/video prediction (treating frames as sequences)? According to Figure 2, existing video diffusion models may also suffer from gradient conflicts. However, considering that existing video diffusion models typically predict shorter video clips, to what extent is this issue critically impactful? For example, how do the similarity maps in Figure 2 change as $L$ varies? Code Of Conduct: Affirmed. Overall Recommendation: 4
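The review's premise that a moving average acts as a low-pass filter can be verified numerically. A minimal NumPy sketch (kernel length and FFT size are illustrative choices, not the paper's):

```python
import numpy as np

def ma_response(k, n=256):
    """Magnitude frequency response |H(f)| of a length-k moving-average
    kernel, zero-padded to n points."""
    kernel = np.ones(k) / k
    return np.abs(np.fft.rfft(kernel, n=n))

H = ma_response(k=5)
# DC passes unchanged; the Nyquist frequency is strongly attenuated.
print(round(float(H[0]), 3), round(float(H[-1]), 3))  # ≈ 1.0 0.2
```

The response is 1 at zero frequency and falls off toward Nyquist (with small sinc side lobes), which is exactly the "keep informative low-frequency components" behavior the summary describes.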
Rebuttal 1: Rebuttal: We are grateful for your valuable suggestions and insightful comments. We'd like to reply as follows: **1. Uniform sampling strategy** We've added the default DDIM sampling strategy to our experiments. Specifically, for a given sampling budget, we start with t=T and uniformly select the sampling steps. The results can be seen at (https://anonymous.4open.science/r/ICML2025_rebuttal-134C/sample_strategy.png). We can see that uniform sampling did offer a better result than random sampling, but the marginal gain in performance decreases as $\tau/T$ grows. Compared to both the random and uniform strategies, our factor-only strategy shows superior performance. **2. Ablation on $L$** We did an ablation study on the sequence length $L \in \{48, 96, 192\}$ and recorded the gradient similarity at this link (https://anonymous.4open.science/r/ICML2025_rebuttal-134C/ablation_L.png). We observe that for all L, gradient conflicts still exist when DDPM is applied directly to time series data, and our method consistently alleviates them. Notably, the severity of these conflicts does not change as L varies. Instead, we think the gradient conflicts mainly come from the degradation design: in our framework, MA always keeps more low-frequency information across diffusion steps. These low-frequency components serve as common information during diffusion training. Thus, the inputs to the denoising model are more structurally similar, and consequently, the gradients across different steps are similar. **3. Backbone architecture** For the denoising backbone, we mainly rely on DiT, adapted to 1D time series. For the condition encoder in the forecasting task, we adopt an MLP with RevIN [1], which is widely used in TS forecasting. In our paper, we don't focus much on neural architecture design, and we believe the backbones can be replaced with other reasonable networks.
Combining SOTA time series architectures with our framework could also be part of our future work. **4. Applicability to video generation** We appreciate your insightful comments for inspiring us to explore more of our framework. Though we are not experts in the field of video generation, we still tried to evaluate our framework on a simple "video-like" time series. Specifically, we believe that the transitions between frames in a video should be smooth and mild, because even the most common frame rate (e.g., 24 FPS) is higher than the sampling rate of typical time series data. Thus, we manually created a pure 2-D signal: $x(t)=\sqrt{0.5 t} + 0.7 \cos(t) + 0.3 \sin(4 t)$, $y(t)=\sqrt{0.2 t} + 0.8 \cos(t) + 0.2 \sin(3 t)$ with $t\in[0, 4\pi]$, recorded over 1280 time steps (mimicking a 53 s video at 24 FPS), and concatenated them as a bivariate time series. Then, we slid temporal windows with $L \in \{48, 96, 192\}$ and recorded the gradient similarity of directly using DDPM and of our framework. As shown in (https://anonymous.4open.science/r/ICML2025_rebuttal-134C/simple_syn.png), we found a similar phenomenon, and our framework also eases the gradient conflicts to some extent. However, we think more framework adaptation is needed to bridge the large modality gap for video generation. In summary, our framework shows similar effects on our synthetic "video-like" time series, and we reckon that our high-level idea could also be applied to video generation with the necessary adaptation. We sincerely hope that our framework can also be enlightening and supportive for fields beyond time series. [1] Reversible Instance Normalization for Accurate TSF Against Distribution Shift --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I sincerely appreciate the additional experiments, especially the use of 2D time-series data to mimic video signals. All the results support the claims in the paper.
Although the authors are not experts in video generation, they have made a commendable effort to demonstrate the promising applicability of their method to this field. A minor suggestion: currently, the "uniform sampling" curve does not include a confidence interval like that of "random sampling". This might be due to the tight rebuttal schedule, which constrains repeated experiment runs. It would be beneficial to include the confidence interval in a future version. All my concerns are resolved, and I will raise my rating to accept. --- Reply to Comment 1.1.1: Comment: We really appreciate your acknowledgment of our efforts, as well as all of your kind comments and advice that help us improve our work. We are also grateful for your understanding of the tight rebuttal schedule. Regarding your minor suggestion on uniform sampling, we will add the confidence interval in our future updated version.
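The synthetic "video-like" bivariate series described in the rebuttal above can be reproduced directly from its formulas; only the sliding-window helper and stride are my own illustrative additions.

```python
import numpy as np

# The rebuttal's 2-D signal: 1280 samples over [0, 4*pi],
# mimicking ~53 s of 24 FPS video.
t = np.linspace(0, 4 * np.pi, 1280)
x = np.sqrt(0.5 * t) + 0.7 * np.cos(t) + 0.3 * np.sin(4 * t)
y = np.sqrt(0.2 * t) + 0.8 * np.cos(t) + 0.2 * np.sin(3 * t)
series = np.stack([x, y], axis=-1)          # shape (1280, 2)

def sliding_windows(data, L, stride=1):
    """Slide temporal windows of length L over the series (stride assumed)."""
    return np.stack([data[i:i + L] for i in range(0, len(data) - L + 1, stride)])

windows = sliding_windows(series, L=96)
print(series.shape, windows.shape)          # (1280, 2) (1185, 96, 2)
```

Repeating this for L in {48, 96, 192} yields the window sets on which the rebuttal measured gradient similarity.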
Summary: This paper proposes MA-TSD (Moving Average Time Series Diffusion), a novel time series diffusion model that replaces the standard isotropic diffusion process with a moving average transition. The key motivation is that existing isotropic diffusion models degrade low- and high-frequency components identically, which is problematic for time series data, where low-frequency components often carry more essential information. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, but a generative evaluation is missing, as described in the weaknesses section. Theoretical Claims: No Experimental Designs Or Analyses: No Supplementary Material: No Relation To Broader Scientific Literature: Time series generative models Essential References Not Discussed: Yes, the paper is about a generative model, yet it does not address the time series generation literature. Other Strengths And Weaknesses: **Strengths:** - The use of different diffusion processes for different frequency levels is a novel and insightful approach. - The gradient analysis provides valuable insights and strengthens the motivation behind the method. **Weaknesses:** - Despite proposing a new generative approach, the paper does not evaluate it on standard generative modeling benchmarks [1, 2, 3]. - In the super-resolution section, the improvement is demonstrated against only one diffusion model, limiting the assessment of its adaptability to other diffusion frameworks like EDM [4], FM [5], or alternative architectures. - The forecasting benchmark lacks comparisons with state-of-the-art forecasting models. While I acknowledge the distinction between deterministic and probabilistic forecasting, a direct comparison remains essential.
[1] Interpretable Diffusion for General Time Series Generation [2] Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series [3] Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [4] Elucidating the Design Space of Diffusion-Based Generative Models (EDM) [5] Flow Matching for Generative Modeling Other Comments Or Suggestions: No Questions For Authors: See weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the valuable feedback and the opportunity to strengthen our experiments. Below, we address each of the weaknesses: **1. Standard Time Series Generation** We agree that the standard generation task is necessary for evaluating our model. Thus, we follow the setting of the benchmarks you mentioned and experiment on three datasets: ETTh2, Exchange, and ECG (newly added, medical time series). We also include the discriminative score and predictive score as additional metrics for evaluating fidelity and usefulness, as the following table shows:

| Dataset | Model | Disc. Score | Pred. Score | Context-FID |
| :------: | :--------: | :---------: | :---------: | :---------: |
| ETTh2 | KoVAE | 0.069 | 0.034 | 0.258 |
| | ImagenTime | 0.053 | 0.054 | 0.118 |
| | **ours** | **0.044** | **0.026** | **0.075** |
| Exchange | KoVAE | 0.137 | 0.038 | 1.520 |
| | ImagenTime | 0.129 | 0.067 | 1.112 |
| | **ours** | **0.030** | **0.027** | **0.083** |
| ECG | KoVAE | 0.459 | 0.081 | 1.206 |
| | ImagenTime | 0.400 | 0.079 | 1.223 |
| | **ours** | **0.345** | **0.076** | **0.979** |

We can see that, compared to the SOTA time series generation models, our method still shows salient improvements in discriminative and predictive scores, illustrating its capability of generating high-fidelity and useful synthetic time series samples. **2. Super-resolution (SR)** We included flow matching with a Variance Preserving path (FM-VP) as an additional SR benchmark. The backbone used in FM-VP is kept the same as in our original benchmark and model.
The following tables record the comparison:

Table 1. Consistency error comparison

| Scale | Model | MFRED | Wind | Solar |
| :---: | :---: | :-------: | :-------: | :-------: |
| 3 | ours | **0.003** | **0.007** | **0.011** |
| 3 | FM-VP | 0.033 | 0.064 | 0.052 |
| 6 | ours | **0.004** | **0.010** | **0.012** |
| 6 | FM-VP | 0.024 | 0.050 | 0.036 |
| 12 | ours | **0.005** | **0.014** | **0.013** |
| 12 | FM-VP | 0.016 | 0.035 | 0.024 |

Table 2. Context-FID comparison

| Scale | Model | MFRED | Wind | Solar |
| :---: | :---: | :-------: | :-------: | :-------: |
| 3 | ours | **0.105** | **0.286** | **0.349** |
| 3 | FM-VP | 1.348 | 4.231 | 0.795 |
| 6 | ours | **0.124** | **1.024** | 0.697 |
| 6 | FM-VP | 1.338 | 4.468 | **0.685** |
| 12 | ours | **0.436** | **3.057** | 1.413 |
| 12 | FM-VP | 1.522 | 4.874 | **0.862** |

Despite the slight inferiority in Context-FID on the Solar dataset as the SR scale increases, our proposed method still outperforms the FM-VP model overall, especially in terms of consistency. It should be noted again that we perform SR naturally through our backward process instead of retraining the whole conditional model as the benchmarks do. Therefore, we think our method provides a good trade-off among (1) training overhead, (2) SR quality, and (3) consistency with the low-resolution input. **3. Forecasting** Although our paper focuses on improving the time series diffusion model, we agree that it is necessary to include SOTA time series forecasting methods as a reference. Therefore, we included Autoformer [1], Non-stationary Transformer (NSformer) [2], and PatchTST [3] to compare deterministic forecasting performance. We ran all models under the same setting as in our paper, i.e., L = {96, 192, 336, 720}.
| | Electricity | ETTh2 | ETTm2 | Exchange | Traffic | Weather | Rank |
| :--------- | ----------: | ----: | ----: | -------: | ------: | ------: | -------: |
| Autoformer | 0.594 | 0.218 | 0.168 | 0.601 | 0.267 | 0.293 | 3.83 |
| NSformer | 0.367 | 0.230 | 0.146 | 0.440 | 0.229 | 0.278 | 2.83 |
| PatchTST | 0.412 | 0.202 | 0.122 | 0.500 | 0.179 | 0.189 | 1.83 |
| **ours** | 0.340 | 0.212 | 0.124 | 0.372 | 0.166 | 0.207 | **1.50** |

Regarding overall performance, our model still ranks first among these benchmarks, though it is slightly inferior to PatchTST on ETTh2 and Weather. It should be noted that these SOTA architectures are specifically tailored for time series forecasting and well adapted to the benchmark datasets, while forecasting is only one downstream application of our proposed MA-TSD framework. Therefore, we think there is great potential to incorporate SOTA architectures into our MA-TSD framework for better forecasting performance in future work. [1] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting [2] Non-stationary Transformers: Exploring the Stationarity in Time Series Forecasting [3] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers
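The deterministic comparison above is scored with MSE, while the paper's probabilistic results are scored with CRPS. For reference, CRPS can be estimated from predictive samples; a minimal sketch of the standard sample-based estimator $\mathrm{CRPS} \approx \mathbb{E}|X-y| - \tfrac{1}{2}\mathbb{E}|X-X'|$, not the authors' implementation:

```python
import numpy as np

def crps_samples(samples, y):
    """Sample-based CRPS estimate: E|X - y| - 0.5 * E|X - X'|."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
# A sharp, well-centered predictive distribution scores lower (better)
# than a diffuse one centered at the same point.
good = crps_samples(rng.normal(0.0, 0.1, 1000), y=0.0)
bad = crps_samples(rng.normal(0.0, 1.0, 1000), y=0.0)
```

Lower is better; in the tables above, CRPS is averaged over all forecast positions and variables.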
Summary: When training a standard diffusion model on time series datasets, (Contribution 1) the authors identified that gradients conflict between small and large t values, which hinders training. To address this issue, they propose (Contribution 2) a heuristic solution: adding a "moving average", widely used on time series data, as an additional corruption in the forward equation. Additionally, since mean and variance play important roles in time series data, they train the model for post-denormalization from z to x separately. They also propose an accelerated sampling method similar to DDIM. Experiments were conducted on six time series datasets, (Contribution 3) showing the highest performance compared to baseline methods on forecasting and super-resolution tasks. Ablation studies demonstrate that the proposed methods contribute significantly to these performance improvements. Claims And Evidence: There are two claims in this paper: (1) When training diffusion models on time series data, gradient conflicts occur between small timesteps and large timesteps. (2) This issue can be mitigated through moving average data degradation. Claims 1 and 2 are supported phenomenologically through Figure 2. However, the explanation of why MA-TSD prevents gradient conflict, i.e., Claim 2, still seems insufficient. Regarding the methodology of corruption through a moving average, it would be much better if they could provide a more developed theoretical or intuitive motivation through toy examples, beyond "time series data have more important low-frequency signals than other data." (Minor Q) To my knowledge, gradient conflict phenomena and instability issues across timestep regimes also occur in image datasets [1]. However, blurring diffusion models [2], which are similar to the method in this paper, are not considered effective there, in contrast to what is demonstrated in this paper. Could the reasons be: (1) the moving average is more effective on time series data?
(2) learning in latent space through a VAE solves this problem? I'm curious about the inconsistency where similar methods weren't effective for images but achieved significant performance improvements on time series data. (3) Or is it because the noise schedule is determined by Eq. (13)? I'm curious about your opinion or intuition. Experimental evidence would be a plus. [1] Truncated Consistency Models, https://arxiv.org/abs/2410.14895, ICLR25 [2] Blurring Diffusion Models, https://arxiv.org/abs/2209.05557 Methods And Evaluation Criteria: MA-TSD has methodological similarities to blurring diffusion models, but (1) has a distinct motivation regarding "gradient conflict" and (2) shows notable performance improvements on time series data, thus making a distinguishable contribution. Evaluation was conducted on six datasets using MSE and CRPS for time series forecasting, and Consistency and Context-FID for super-resolution evaluation. Performance improvements were shown across almost all datasets and metrics. Though I'm not an expert in this field, the authors conducted sufficiently extensive evaluations and demonstrated impressive performance improvements. Theoretical Claims: There are no theoretical claims in this paper that require verification. Experimental Designs Or Analyses: There are no particular issues with the experimental settings in this paper. A notable point is that Table 4's ablation study accurately shows the size of the performance contributions of the moving average and instance normalization. I personally find the performance contribution of MA impressive. Supplementary Material: I checked all the supplementary material. Relation To Broader Scientific Literature: Research that achieves performance improvements through changes to the forward equation has become an almost abandoned research direction for images, so it's impressive that it has been rediscovered for time series data.
This shows that even if it wasn't effective for images, there is room for more research on forward equations for video or audio, which have time series characteristics. Some might evaluate this research's novelty as low, but I'd like to assign it high novelty in terms of rediscovery. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I think the writing and the presentation are quite clear. As I've mentioned several times, the greatest value of this paper is showing that a method I thought had been abandoned can still be valid in other domains. Other Comments Or Suggestions: I hope to see qualitative results from the various methods (M3VAE, TMDM, MR-DIFF) to understand their qualitative differences. Questions For Authors: Q1: What exactly is the difference from Blurring Diffusion Models [1]? I'm curious about the points where the contribution is distinguished. If I can clearly understand where the contribution is distinct, I would like to raise my score to accept. Q2: The fact that a moving average aligns gradient signals across timesteps seems like a very surprising discovery to me. Do you have any theory or intuition that can explain this? The reason that "time series data often have more informative low-frequency components" doesn't seem sufficient to me. [1] Blurring Diffusion Models, https://arxiv.org/abs/2209.05557 Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your acknowledgement of our work. We would like to address your questions as follows. **Q1: The difference between the Blurring Diffusion Model (BDM) and ours** From a high-level perspective, BDM and our method share a similar idea, i.e., building the degradation process with low-pass filters (blurring in BDM, MA in ours). However, there are clear distinctions. 1. **Filtering space**: Though we think this is minor due to the convolution theorem, we point it out first to pave the way for the following parts. Specifically, we filter the data in the time domain by convolution (matrix multiplication), $q(\mathbf{x}_t | \mathbf{x}_0) = \mathcal{N} (\mathbf{K}_t\mathbf{x}_0,\beta_t^2 \mathbf{I})$, while BDM blurs the images in the frequency domain and transforms back to the pixel domain, i.e., $q(\mathbf{x}_t|\mathbf{x}_0) = \mathcal{N} ( \mathbf{V} \boldsymbol{\alpha}_t \mathbf{V}^\top \mathbf{x}_0, \sigma_t^2 \mathbf{I})$, where $\mathbf{V}^\top,\mathbf{V}$ are the DCT and IDCT, and $\boldsymbol{\alpha}_t$ is the frequency response of the Gaussian blurring kernel, a diagonal matrix, each entry $\alpha_t^i \in (0, 1]$ of which is the coefficient of the i-th frequency component. For low-pass filters, $\alpha_t^i$ decreases with i until it is (nearly) zero, to suppress high-frequency components. 2. **Markovian or not (Major)**: Since $\boldsymbol{\alpha}_t$ is diagonal, BDM proposed that a standard Markovian DDPM can be constructed for each frequency component. The one-step transition is defined accordingly, i.e., $q(u_t | u_s) = \mathcal{N}(\alpha_{t|s}u_s, \sigma_{t | s}^2)$, where $s=t-1$, $u_t=V^\top x_t$ is the frequency representation, and $\boldsymbol{\alpha}_{t | s}=\boldsymbol{\alpha}_t / \boldsymbol{\alpha}_s$. However, dividing by $\boldsymbol{\alpha}_s$ can be problematic in practice, because $\alpha_t^i$ can become (nearly) zero for large i, so dividing by $\boldsymbol{\alpha}_s$ is numerically unstable across all diffusion steps.
Therefore, though BDM claimed to have a Markov transition under the DDPM framework, we think it is improper to define $q(x_t | x_s)$ when the transition operation from $x_0$ to $x_t$ is non-invertible. In our framework, faced with a similarly non-invertible MA, we bypassed the definition of $q(x_t|x_s)$, assumed $q(x_{1:T} | x_0)$ to be non-Markovian (in the DDIM style), and then carefully defined $q(x_s|x_t, x_0)$ to satisfy $q(x_t|x_0)$ for all t. Thus, Markovianity is another distinct difference between our method and BDM. 3. **Noise schedule**: BDM designed $\boldsymbol{\alpha}_t = a_t \mathbf{d}_t$, where $\mathbf{d}_t$ is the frequency response of the blurring kernel and $a_t \in [0,1]$ is an extra scalar decreasing with t, with the noise schedule $\sigma_t = 1-a_t^2$. In our framework, the noise schedule $\beta_t$ is dataset-based, chosen to account for the variance decrease caused by MA (Eq. 13) on different datasets. To compare the two, we mimicked BDM's noise schedule, i.e., we multiplied $\mathbf{K}_t$ by $a_t$ and set $\beta_t = 1-a_t^2$, and ran the experiments on the forecasting task (L=96). We can see that test errors indeed increased, which might be because BDM's $\sigma_t$ schedule ignores the extra effect of blurring ($\mathbf{d}_t$) and considers only $a_t$.

| Dataset | Model | MSE | CRPS |
| :---------: | :------: | :---: | :---: |
| electricity | ours | 0.288 | 0.158 |
| | ours+a | 0.303 | 0.158 |
| ETTh2 | ours | 0.136 | 0.121 |
| | ours+a | 0.166 | 0.130 |
| Exchange | ours | 0.098 | 0.110 |
| | ours+a | 0.193 | 0.136 |

In summary, despite the similar high-level idea, there are clear differences between BDM and our method. BDM tried to fit into the standard DDPM framework, while we reformulated a framework specially adapted to moving average filters and time series. Unfortunately, BDM did not release official code, and we are unable to reproduce it on time series data for a more quantitative comparison.
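The time-domain transition $q(\mathbf{x}_t | \mathbf{x}_0) = \mathcal{N}(\mathbf{K}_t\mathbf{x}_0, \beta_t^2 \mathbf{I})$ described in point 1 above can be sketched minimally in NumPy. The centered-window kernel, its width, and the noise level below are illustrative assumptions; they do not reproduce the paper's dataset-based schedule (Eq. 13).

```python
import numpy as np

def ma_matrix(L, w):
    """Moving-average transition matrix K: row i averages a centered window
    of width w around position i (truncated at the boundaries)."""
    K = np.zeros((L, L))
    for i in range(L):
        lo, hi = max(0, i - w // 2), min(L, i + w // 2 + 1)
        K[i, lo:hi] = 1.0 / (hi - lo)
    return K

def forward_sample(x0, w_t, beta_t, rng):
    """One draw from q(x_t | x_0) = N(K_t x_0, beta_t^2 I) in the time domain."""
    K = ma_matrix(len(x0), w_t)
    return K @ x0 + beta_t * rng.standard_normal(len(x0))

rng = np.random.default_rng(0)
x0 = np.sin(np.linspace(0, 8 * np.pi, 96))       # a toy length-96 series
xt = forward_sample(x0, w_t=9, beta_t=0.1, rng=rng)
# The mean K_t @ x0 is a smoothed x0: high-frequency content shrinks while
# the low-frequency structure survives, unlike isotropic DDPM corruption.
```

Each row of the kernel matrix sums to one, so the degradation preserves local means while attenuating high frequencies, which is the MA property the rebuttal relies on.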
**Q2: Moving average aligns gradient signals** Intuitively, a mild data degradation in diffusion should benefit the learning of denoising networks, since the inputs can be mostly noise if degradation is too fast. For example, for T=500 in Fig. 2, the gradients at t > ~190 (bottom-right block) are similar because the $x_t$ are almost pure noise and the model can hardly learn anything. For t < ~190 (top-left block), they are more informative and contribute more to denoising learning. Thus, we would like to "prolong" the top-left block to help the learning. Meanwhile, to reconstruct a time series, low-frequency (LF) information is usually more helpful. Thus, we are motivated to use MA, a widely used TS low-pass filter, to design a mild degradation where LF information is kept longer and serves as common information for all diffusion steps. In this way, the inputs to the denoising model are more structurally similar, so gradient conflicts between different steps are reduced. **Others**: Forecasting examples are at https://anonymous.4open.science/r/ICML2025_rebuttal-134C/forecast_plot.png --- Rebuttal Comment 1.1: Comment: I appreciate the authors' hard work. I had questions about the exact differences between BDM and your formulation, and what distinct contributions each has, but those parts have been clarified. Accordingly, I'm raising my score to **accept**. > Q1: The difference between the Blurring Diffusion Model (BDM) and yours Thank you for the explanation. To summarize my understanding: - Thanks to the non-Markovian properties: (1) you can avoid explicitly defining $q(x_t|x_{t-1})$, which has numerical stability issues, and (2) the formulation works even when the corruption from $x_0$ to $x_t$ is not invertible. (But, actually, I still don't fully understand exactly what the advantage of point 2 is.) - Thanks to the data-dependent noise schedule: performance improvement. These are the two differences from BDM. I now understand which parts are similar and which are different, and accordingly, I'll adjust my score upward.
It would be good to briefly mention this somewhere in the paper. Rather than emphasizing that there's a distinct contribution, it would be better to explain what changes were necessary when applying BDM's ideas to time series. > Q2: Moving average aligns gradient signals There's a logical jump, but it's an interesting perspective. First, regarding the amount of information learned at each timestep, [1] proposes a noise schedule that evens out the amount of information to be learned at each timestep. This research seems to suggest that simply adjusting the noise schedule is insufficient and that it also depends on the characteristics of the information contained in the data. I find your intuition quite convincing, and it's an interesting intuition that explains why your method works well. However, just because the information learned at each timestep is evenly distributed doesn't necessarily mean that gradient conflicts won't occur, because the types of information learned at each timestep can differ. In fact, in the lower-left part of Figure 2, it appears that the gradient from small diffusion timesteps interferes with other regions. This seems somewhat inconsistent with the intuition, given that it's an area with sufficient remaining information. Nevertheless, the intuition that if information is removed too quickly, proper learning doesn't occur at later timesteps, resulting in meaningless gradients, seems reasonable. Thank you for the explanation. [1] Continuous diffusion for categorical data, https://arxiv.org/abs/2211.15089 --- Reply to Comment 1.1.1: Comment: We greatly appreciate your insightful and valuable discussion of both questions, as well as your acknowledgment of our paper. For your further comments, we'd like to explain a bit more about the effects of the non-Markovian design and have a general discussion of the gradient conflicts. **1. Non-Markovian design** In fact, whether the process is Markovian or not affects the formulation of the backward process, i.e.,
$q(\mathbf{x}_{t-1} | \mathbf{x}_t, \mathbf{x}_0)$. In the BDM case, the DDPM-like Markovian assumption simplifies the calculation of $q(\mathbf{x}_{t-1} | \mathbf{x}_t, \mathbf{x}_0)$, but requires formulating $q(\mathbf{x}_t | \mathbf{x}_0)$ and $q(\mathbf{x}_t | \mathbf{x}_{t-1})$. If we define $q(\mathbf{x}_t | \mathbf{x}_0)$ with non-invertible low-pass filters, then $q(\mathbf{x}_t | \mathbf{x}_{t-1})$ becomes badly defined because it involves inverting blurring kernels (Eq. 16-18, 21 in BDM), which is practically unstable. Though some epsilons can be added to make the code "runnable", tiny errors in the high-frequency components will still be amplified substantially through the iterative backward process. Chances are that the generated data become dominated by improperly amplified high-frequency components.

In our case, we bypass the Markovian assumption, so the formulation of $q(\mathbf{x}_t | \mathbf{x}_{t-1})$ is also skipped. Instead, we formulate $q(\mathbf{x}_{t-1} | \mathbf{x}_t, \mathbf{x}_0)$ in a DDIM style, requiring only that it be consistent with $q(\mathbf{x}_t | \mathbf{x}_0)$. The resulting backward equation (Eq. 14 in our paper) doesn't involve any inversion of MA kernels. Thus, the non-Markovian design lets us avoid this problem entirely. We also highly agree that it would be better to mention these differences in the paper, and we will thoroughly integrate your suggestions into our revised manuscript.

**2. Gradient conflicts**

Thanks for your insightful opinion on this problem. As we understand your point, gradient conflicts may come from two sources: (1) the uneven information distribution across steps and (2) differences in the information that the denoising network tries to learn at each step. Indeed, our intuition mainly concerned the first source. Accordingly, our framework is aimed at alleviating the gradient conflicts caused by the uneven information distribution.
We fully agree that analyzing the joint contribution of both sources to gradient conflicts could deepen our understanding of diffusion training, and we will continue investigating this topic in future work.
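As an aside for readers following this thread, the low-pass intuition discussed above is easy to check numerically. The sketch below is a toy construction (the paper's actual degradation kernels differ): it applies one causal moving-average step to a two-tone series and confirms that the low-frequency component survives while the high-frequency component is heavily attenuated.

```python
import numpy as np

def ma_kernel(L, w):
    """Causal moving-average matrix: row i averages the last (up to) w points."""
    K = np.zeros((L, L))
    for i in range(L):
        lo = max(0, i - w + 1)
        K[i, lo:i + 1] = 1.0 / (i + 1 - lo)
    return K

L = 256
t = np.arange(L)
# toy series: slow trend (1 cycle) plus fast oscillation (32 cycles)
x = np.sin(2 * np.pi * t / L) + 0.5 * np.sin(2 * np.pi * 32 * t / L)

x_deg = ma_kernel(L, w=9) @ x  # one degradation step

spec0, spec1 = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(x_deg))
lf_retention = spec1[1] / spec0[1]    # amplitude kept at the low frequency
hf_retention = spec1[32] / spec0[32]  # amplitude kept at the high frequency
assert lf_retention > 0.9 > hf_retention
```

With a window of 9 over 256 samples, the 1-cycle tone is retained almost perfectly while the 32-cycle tone loses most of its amplitude, which is the "common low-frequency information" behavior the rebuttal describes.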
Summary: This paper presents a non-isotropic time series diffusion model (MA-TSD) for time series analysis. The key idea is to use a moving average in the forward process to better preserve low-frequency information, thereby avoiding gradient conflicts during training. The model also features an accelerable backward process, similar to DDIM, which can be viewed as time series super-resolution, and employs instance normalization to standardize the data. Experiments on real-world datasets (such as Electricity, ETTh2, and Traffic) show that MA-TSD outperforms existing diffusion-based models in both forecasting and super-resolution tasks. Claims And Evidence: The paper argues that using standard DDPM on time series causes gradient conflicts because low-frequency information decays too quickly, leading to inconsistent data perception across diffusion steps. It claims that incorporating a moving average in the forward process helps preserve low-frequency components, resulting in more stable training and higher-quality generation. These claims are backed by both theoretical analyses (such as gradient similarity matrices and spectral energy ratio comparisons) and experimental evidence on datasets like Electricity. However, the comparisons are mainly limited to diffusion models and do not include mainstream non-diffusion methods like Transformers or RNNs. Methods And Evaluation Criteria: The method combines a moving average forward process, instance normalization, and an accelerable backward process to better handle time series frequencies and preserve low-frequency information. It is evaluated on forecasting and super-resolution tasks using metrics like MSE, CRPS, and Context-FID. Overall, these techniques and metrics are suitable for the problem, though the model is complex and might be computationally demanding for large datasets. (Note: My background in time series is limited, so I may not fully grasp all the details of these evaluation metrics.) 
Theoretical Claims: The paper presents clear and rigorous derivations for the forward and backward processes, noise scheduling, and the conditional joint distribution. These proofs are based on fundamental principles of probability and mathematics. Although the derivations can be dense for non-experts, no major errors were found.

Experimental Designs Or Analyses: The experiments are well-designed. The authors tested their model on six real-world datasets with different time patterns, using metrics such as MSE, CRPS, and Context-FID, and they performed ablation studies to evaluate the key components of MA-TSD. The results clearly demonstrate the model's advantages in training stability, low-frequency information retention, and generation efficiency. Including comparisons with non-diffusion models could further improve the evaluation. Please note that my understanding of time-series research is limited, so some of my comments might not be completely accurate.

Supplementary Material: I reviewed the supplementary material, focusing on the additional theoretical proofs, derivations, and experimental details (e.g., the derivation of the forward process, noise scheduling, and backward process). However, I must note that I am not a time series expert, so my understanding of these supplementary details is somewhat limited.

Relation To Broader Scientific Literature: The paper builds on established diffusion models like DDPM and DDIM, adapting them for time series by using a moving average to preserve low-frequency information and improve training stability. This approach extends ideas from image and video synthesis to sequential data tasks such as forecasting and super-resolution. Please note that I am not a time series expert, so my understanding of the broader literature in this area is limited.

Essential References Not Discussed: No.
Other Strengths And Weaknesses:

Strengths:
- The model demonstrates superior performance in both time series forecasting and super-resolution tasks.
- The work combines rigorous theoretical analysis with comprehensive experimental validation, lending strong support to its claims.

Weaknesses:
- Despite the accelerated backward process, the overall model structure is complex, and the computational demands may be high when processing large-scale time series data.
- The experiments are primarily conducted on specific datasets, so the model's performance on other types or domains of time series data remains to be further verified.

Again, I am not an expert in time series.

Other Comments Or Suggestions: Nothing for this part.

Questions For Authors: No; please just respond to the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely appreciate your time and effort in reviewing, as well as your constructive feedback, which helps strengthen our work. We would like to address your questions as follows.

**1. Lack of non-diffusion benchmarks for comparison**

We agree that non-diffusion benchmarks are also necessary for general comparison, even though our work focuses on improving time series diffusion models. We have added related experiments with Autoformer, Non-stationary Transformer, and PatchTST for time series forecasting.

| | Electricity | ETTh2 | ETTm2 | Exchange | traffic | weather | rank |
| :--------- | ----------: | ----: | ----: | -------: | ------: | ------: | -------: |
| Autoformer | 0.594 | 0.218 | 0.168 | 0.601 | 0.267 | 0.293 | 3.83 |
| NSformer | 0.367 | 0.230 | 0.146 | 0.440 | 0.229 | 0.278 | 2.83 |
| PatchTST | 0.412 | 0.202 | 0.122 | 0.500 | 0.179 | 0.189 | 1.83 |
| **ours** | 0.340 | 0.212 | 0.124 | 0.372 | 0.166 | 0.207 | **1.50** |

Regarding overall performance, our model still ranks first among these benchmarks, though it is slightly inferior to PatchTST on ETTh2 and Weather. It should be noted that these SOTA architectures are particularly tailored for time series forecasting and well adapted to the benchmark datasets, while forecasting is only one downstream application of our proposed MA-TSD framework. Therefore, we see great potential in incorporating such SOTA architectures into the MA-TSD framework for better forecasting performance in future work.

**2. Computational demands**

We would like to analyze the computational burden of our method from two perspectives, i.e., spatially and temporally.

Spatially: The training process of our framework is almost identical to standard diffusion, i.e., sample a batched time series window, degrade it, and train a denoising network on it. The main difference lies in the forward (degrading) process.
In our framework, we need to store a series of kernel matrices $\boldsymbol{K}_t$ in advance for degrading at different diffusion time steps, which in practice is a tensor with shape $T \times L \times L$ (T = diffusion steps, L = sequence length). Thus, modeling an extremely long sequence would increase the storage demand quadratically. Fortunately, $\boldsymbol{K}_t$ is sparse because only unconvolved time steps remain zero, and this sparsity decreases with t. For example, when $T=100, L=576$, the sparsity ($1 - oc / L^2$, where $oc$ is the number of non-zero entries) at $t=20,40,60,80$ is [0.979, 0.945, 0.879, 0.694]. Therefore, though we need to store a series of kernel matrices, they are mostly highly sparse and thus do not impose much storage burden.

Temporally: The most time-consuming part of diffusion models is sampling, but we can use the accelerated backward process to speed it up, especially in the time series super-resolution task. As we illustrated in the paper, we conduct SR naturally through our backward process without retraining the whole model, unlike benchmarks that rely on conditional inputs. In practice, we used only one NVIDIA 4090 24GB to conduct all the experiments. Even running an $L=720$ task from scratch on the largest dataset (ETTm2) takes only $\textbf{0.72}$ GPU hours to train and sample, so our method has no significant computational demands.

**3. Dataset diversity**

We highly agree that dataset diversity is important for evaluation. Our current datasets already include Traffic (traffic system), Electricity/MFRED (power system), Exchange_rate (financial market), ETT (IoT sensors), Weather (meteorological system), and Solar/Wind (renewable energy). These are widely used benchmark datasets in the time series field [1,2]. Beyond these, we also added an ECG dataset (medical system) during the rebuttal period for a more complete evaluation.
We launched a standard time series generation experiment on the ECG dataset and included two more benchmarks, i.e., KoVAE (RNN-based) and ImagenTime (diffusion-based). The results are shown below.

| Dataset | Model | Disc. Score | Pred. Score | Context-FID |
| :------: | :--------: | :---------: | :---------: | :---------: |
| ECG | KoVAE | 0.459 | 0.081 | 1.206 |
| | ImagenTime | 0.400 | 0.079 | 1.223 |
| | **ours** | **0.345** | **0.076** | **0.979** |

Disc. Score evaluates fidelity by training a post-hoc classifier to distinguish generated data from real data. Pred. Score evaluates usefulness by training an RNN forecasting model on the generated data and testing it on real data [3]. These results show the superiority of our method over the benchmarks in both fidelity and usefulness.

[1] Deep Time Series Models: A Comprehensive Survey and Benchmark
[2] A Survey on Diffusion Models for Time Series and Spatio-Temporal Data
[3] TSGBench: Time Series Generation Benchmark
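The sparsity argument from point 2 of this rebuttal can be sanity-checked with a toy banded kernel (hypothetical construction, not the paper's exact $\boldsymbol{K}_t$): sparsity is high at small $t$ and decreases as the band widens with the diffusion step.

```python
import numpy as np

def toy_kernel(L, t):
    """Hypothetical banded degradation matrix: row i averages the
    min(t + 1, i + 1) most recent points. Not the paper's exact K_t."""
    K = np.zeros((L, L))
    for i in range(L):
        lo = max(0, i - t)
        K[i, lo:i + 1] = 1.0 / (i + 1 - lo)
    return K

def sparsity(K):
    """Fraction of zero entries, matching the rebuttal's 1 - oc / L^2."""
    return 1.0 - np.count_nonzero(K) / K.size

L = 576
s = [sparsity(toy_kernel(L, t)) for t in (20, 40, 60, 80)]
# wider bands at later diffusion steps -> sparsity decreases with t
assert all(a > b for a, b in zip(s, s[1:]))
```

The exact values depend on the kernel construction, but the qualitative trend (mostly-zero matrices early, denser ones late) matches the storage argument above.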
Learning with Expected Signatures: Theory and Applications
Accept (oral)
Summary: This paper establishes a rigorous framework for the expected signature of stochastic processes, proving consistency and asymptotic normality under a double asymptotic regime—where the discretization mesh ($\pi$) tends to zero (in-fill asymptotics) and the number of observations $N$ increases (long-span asymptotics). It also introduces a martingale-based variance reduction technique that further improves finite-sample performance, with empirical validations. ## Update After Rebuttal I raised my score as a result of a discussion with the authors (see below). Claims And Evidence: This paper makes several notable contributions that extend and deepen their theoretical utility in machine learning. (i) Bridging Discrete and Continuous Frameworks: The paper rigorously establishes conditions under which the empirical expected signature estimator—computed from discretely observed, piecewise linear approximations of a continuous-time process—converges (in a double-asymptotic sense) to the true expected signature of the underlying latent continuous-time stochastic process. The authors establish consistency and asymptotic normality (Thm. 2.8 / 2.10) under regularity, mixing, and moment conditions. Detailed proofs in the Appendix use techniques from Rough Path theory and stochastic analysis. (ii) Variance Reduction via Martingale Correction: Recognizing that the classical expected signature estimator can suffer from high variance in finite samples, the paper introduces a modified estimator that employs a control variate derived from the properties of the signature in the martingale setting. By leveraging the Stratonovich integral’s properties, this martingale correction leads to significantly lower mean squared error. The authors back up this theoretical finding with empirical experiments demonstrating improved performance in tasks such as time series classification and option pricing. 
Methods And Evaluation Criteria: While the authors evaluate their approach on a classification task, there are practical concerns regarding scalability in modern machine learning settings due to the curse of dimensionality. Specifically, when a $d$-dimensional time series is mapped to its signature truncated at level p, the resulting feature vector has dimension $\sum_{k=0}^{p} d^k$. This growth can become prohibitive for high-dimensional and multimodal data—for example, video streams where d might be on the order of $256 \times 256$ or more. Moreover, for many classification tasks, task-specific representations (such as those learned by deep neural networks that selectively discard irrelevant information) may achieve superior performance compared to employing a full sufficient statistic like the expected signature. Despite these practical limitations, it is important to note that the primary focus of the paper is on theoretical contributions. Given this theoretical orientation, the experimental evaluations serve primarily to validate the underlying mathematical results rather than to claim state-of-the-art performance in classification. Consequently, while the potential practical challenges of scaling signature methods to very high-dimensional data are acknowledged, they should not be considered grounds for rejection of the paper. Theoretical Claims: I have reviewed the high-level theoretical claims and proof sketch but there remains a possibility that oversights exist. Experimental Designs Or Analyses: See the above “Methods and Evaluation Criteria” section. Supplementary Material: The authors do not provide supplementary material. Relation To Broader Scientific Literature: This paper bridges the gap between the theoretical framework of expected signatures—defined via nested iterated integrals in continuous time—and their practical estimation from discrete time series data. 
By rigorously establishing unbiased estimation and asymptotic properties for the expected signature, the authors connect foundational ideas from Rough Path theory with estimation techniques common in practice, thus broadening the future research of signature methods in machine learning. Essential References Not Discussed: Essential references are adequately discussed. Other Strengths And Weaknesses: The paper employs several technical terms—such as BV, the shuffle property, p-variation, and words $I$—without providing sufficient introductory guidance or intuitive explanations. While these concepts may be well-known to experts in stochastic processes and Rough Path theory, the ICML audience is broad and interdisciplinary. To enhance clarity and accessibility, the authors are encouraged to include a concise glossary or an appendix that offers reader-friendly definitions and intuitive explanations of these key terms. Other Comments Or Suggestions: The paper was an enjoyable read. Please note that my comments represent my initial impressions and may include misunderstandings. I welcome further discussion on these points and am open to revising my score once my questions and concerns are adequately addressed. Questions For Authors: Do you plan to make your implementation code publicly available in the near future? Given the complexity of the theory, having accessible code would greatly help in ensuring reproducibility. Code Of Conduct: Affirmed. Overall Recommendation: 5
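For context on the dimensionality concern raised in this review, the size of a level-$p$ truncated signature feature vector is straightforward to compute (illustrative helper, not part of the paper):

```python
def sig_dim(d, p):
    """Number of terms in the signature of a d-dimensional path
    truncated at level p: sum_{k=0}^{p} d^k."""
    return sum(d ** k for k in range(p + 1))

assert sig_dim(2, 3) == 1 + 2 + 4 + 8  # 15 terms
assert sig_dim(5, 4) == 781            # grows geometrically in d
```

Even for modest dimensions the feature count grows geometrically, which is the scalability issue the review flags for high-dimensional inputs such as video.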
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. We believe the reviewer well understood the main contributions of the paper, which they highlighted in "Claims and Evidence". **Methods And Evaluation Criteria** In practical applications the (expected) signature transform may face the curse of dimensionality. Nevertheless estimation of the expected signature is still a relevant topic as, at least for low dimensional time-series, it provides a theoretically motivated domain-agnostic baseline model. Moreover, ongoing work is being carried out to "a priori" select a transformation of the $\sum\_{k=0}^p d^k$ truncated (expected) signature feature vector retaining its essential characteristics. For example in [2] the authors introduce a linear state-space transformation such that redundant information can be easily removed from the signature vector by (a further) truncation. Usually, a domain-specific architecture – for example, in the case of image/video data, a convolutional layer targeting the known topological structure of the input – will likely improve empirical model performance (cf. Remark E.1). In such cases, the (expected) signature can be used as a layer in a neural network, see [3]. When working with video streams, one might consider a sequential architecture with an initial embedding layer mapping the data to a lower dimensional space before computing the (expected) signature. The empirical applications discussed in this paper provide a set of simple benchmark tasks on which to evaluate the performance of the martingale correction (such simple models do not and should not be understood as trying to achieve SOTA results). The scope of this work was not to develop new expected signature-based architectures but rather, as well understood by the reviewer, to fill a gap in the theory and, under suitable conditions, provide an improved empirical estimator for the theoretical expected signature. 
[2] Bayer and Redmann, Dimension reduction for path signatures, preprint, https://arxiv.org/pdf/2412.14723

[3] Bonnier et al., Deep Signature Transforms, NeurIPS 2019, https://arxiv.org/pdf/1905.08494

**Other Strengths And Weaknesses**

We shall add intuitive definitions of technical terms in the appendix; the following is an extract from said glossary.

*Informal glossary*

The $p$-variation of a path is a measure of its regularity. For the purpose of our discussion it suffices to note that paths that have finite $p$-variation for low $p$ are more regular.

A bounded variation (BV) path is a path with finite 1-variation (also known as total variation); this regularity ensures there exists a well-defined notion of integral against such a path (e.g. a piecewise linear path or a continuously differentiable path) and hence we can easily define its signature as in Equation (2). Many interesting stochastic processes (e.g. those driven by Brownian motion) have infinite 1-variation (i.e. are not BV) but have finite $p$-variation for all $p>2$, and hence defining their signature requires rough path theory.

The shuffle property of the signature is an algebraic property stating that the product of two signature terms is a linear combination of higher-order signature terms. More precisely, the product of the signature terms corresponding to words $I$ and $J$ is the sum of all signature terms indexed by words $K$ of length $|I|+|J|$ obtained by interleaving $I$ and $J$. In the context of the discussion on page 1 this means that all moments of the signature can be written as linear combinations of higher-order expected signature terms.

A word $I=(i\_1, \ldots, i\_n)$ with $i\_1, \ldots, i\_n\in\\{1, \ldots, d\\}$ is a multi-index used to denote an entry of the signature, i.e. a real-valued number. The length of the word, i.e. $|I| = n$, denotes to which level ($n$-dimensional tensor) of the signature the entry belongs.
For example $S^I(\mathbb{X})\_{[0,T]}$ where $I=(1,2)$ denotes the $(1, 2)$-entry of the second level of the signature (a matrix). [...] **Questions For Authors** Yes, we will contribute public code in the following two ways: - An independent code repository containing methods to estimate expected signatures using both the simple empirical estimator and the martingale corrected estimator (compatible with `numpy` and `torch` arrays, using `iisignature` and `signatory` for signature computations). This directory will also contain the scripts needed to reproduce the option-pricing experiment of Section 3.2.2. - Forks of the original repositories for the other two experiments adding the option to use the martingale corrected estimator, which can be used to reproduce the results of Sections 3.2.1 and 3.2.3. We will refer to the public repositories in the final version of the paper. This information was redacted from the first submission in adherence with the conference's double blind review policy. --- Rebuttal Comment 1.1: Comment: I greatly appreciate your thorough and insightful comments. Your feedback has clarified all of my concerns, and, in light of discussions with other reviewers, I have raised my score accordingly. I look forward to the final version of the paper and the accompanying code repository.
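The shuffle property described in the glossary can be checked numerically for a piecewise-linear path, since its level-1 and level-2 iterated integrals have closed forms. A minimal sketch (illustrative code, independent of the paper's implementation) verifies $S^{(i)} S^{(j)} = S^{(i,j)} + S^{(j,i)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.cumsum(rng.standard_normal((20, 2)), axis=0)  # piecewise-linear path in R^2

def sig_level1(X):
    """Level-1 signature: the total increment of each coordinate."""
    return X[-1] - X[0]

def sig_level2(X):
    """Exact level-2 signature of a piecewise-linear path, segment by segment."""
    dX = np.diff(X, axis=0)
    S2 = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            run = X[:-1, i] - X[0, i]  # X^i accumulated before each segment
            S2[i, j] = np.sum(run * dX[:, j]) + 0.5 * np.sum(dX[:, i] * dX[:, j])
    return S2

S1, S2 = sig_level1(X), sig_level2(X)
# shuffle property: S^(i) * S^(j) = S^(i,j) + S^(j,i)
assert np.allclose(np.outer(S1, S1), S2 + S2.T)
```

This is the identity referenced in the glossary: products of signature terms are linear combinations of higher-level signature terms, so signature moments determine higher-order expected signature entries.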
Summary: The authors explore an interesting and young topic within ML, namely signature-based methods. Signature methods have been quite useful as a sort of preprocessing stage in ML pipelines, for example to synthesize long time-series data, among other applications. The authors theoretically explore the expected signature and its relation to the empirical expected signature, providing a useful bridge between theory and practice. In this work they explore how to improve the consistency of such empirical estimators. Lastly, they explore a few relevant applications.

## Update after rebuttal

I think this is a significant work that can establish future downstream applications making use of the promising tool of signatures from rough paths theory. I increased my score to 5 - strong accept.

Claims And Evidence: To the best of my knowledge all the claims are supported; however, I am not an expert in rough path theory and only performed a quick review of the arguments and proofs.

Methods And Evaluation Criteria: The authors explore several interesting and relevant experiments. The evaluation seems to be fair and illustrates the strength of the theory, as they are able to reduce the variance of their estimator.

Theoretical Claims: I performed a preliminary read-through of the argument; however, I did not have the time nor expertise to extensively review the 40+ pages of appendices in detail.

Experimental Designs Or Analyses: From what I read, all three time series experiments seem sound.

Supplementary Material: I briefly skimmed the appendices.

Relation To Broader Scientific Literature: This work seems quite relevant to research which uses long time-series data.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

## Strengths
* Well written paper
* Addresses an important area of research
* Theoretical results could be quite important for future research in the coming years

## Weaknesses
* No conclusion at the end to reiterate contributions
* Paper is quite dense (to be expected) and the organization, while good, is not exemplary
  * I think [1] is a good example of illustrating complicated topics for the layman.
* Numerous equations in the main paper are unnumbered, which is bad practice for future citations.
* The motivations for several key design decisions are either missing or not communicated clearly. Especially for work of this type, it is highly important that you make your key ideas as easy to understand as possible.
* While the appendices are quite detailed and show important derivations, this results in the paper being extremely long (I understand, however, that this is the current trend in math-heavy ML research).

[1] Morrill et al., *Neural Rough Differential Equations for Long Time Series*, ICML 2021 https://arxiv.org/pdf/2009.08295

Other Comments Or Suggestions:
* The footnotes on page 6 are quite long, particularly footnote 6. Perhaps footnote 6 could be moved to the appendix (or part of it).
* I think highlighting key points and contributions would make the paper easier to read.
* Likewise, a further discussion of the motivation for design decisions in constructing the estimator would improve readability.

Questions For Authors:
1. Does $S(\mathbb X)\_{[0,T]}$ denote $S^k(\mathbb X)_{[0,T]}$ as $k \to \infty$? This is my understanding but I don't see it stated clearly in the main text.
2. Why is the equation on line 285 unnumbered? It seems quite important to the paper.
3. What is the motivation for constructing the equation mentioned above; why do you choose this corrector? I have some thoughts, but this motivation should be included in the writing.
N.B., I generally like this paper and enjoyed reading it; however, the lack of clarity and explanation of motivations prohibits me from giving this paper the highest rating. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. We believe the reviewer well understood the main contributions of the paper, which they highlighted in "Summary". **Other Strengths And Weaknesses** - Conclusions were not added due to space constraints, in case of acceptance we shall include the following short conclusion reiterating the main contributions of the paper. "In this paper we established new estimation results for the expected signature, a model-free embedding for collections of data streams. Our consistency and asymptotic normality results bridge the gap between the theoretically "optimal" continuous-time expected signature and the empirical discrete-time estimator that can be computed from data. Moreover, we introduced a simple modification of such estimator with significantly better finite sample properties under the assumption of martingale observations. Our empirical results suggest the modified estimator might improve the performance of models employing expected signature computations even when the underlying data generating process is not necessarily a martingale." - Inspired by the provided reference [1], in case of acceptance, we shall include a simple diagram to visually illustrate the main contributions of the paper in the introduction (unfortunately we cannot upload the image in this rebuttal). - We generally followed the convention that all the equations that are cross-referenced in the paper are numbered and the others are not. - See answer to question 3. below. - The results in the appendix provide the detailed proofs of the results discussed in the main paper. These details though are not strictly necessary to understand the main contributions of this paper but we still believe a good mathematical proof should leave as little details unexplained as possible. This is why we would like to keep the level of detail presented in the appendix as is. 
**Other Comments Or Suggestions**

- We will move the discussion contained in this footnote to the main text, as it provides the main motivation for using the martingale correction term; see also the answer to question 3. below.
- As discussed above, we will highlight key points and contributions further by adding an intuitive diagram and a short conclusion.
- This is addressed in the answer to question 3. below.

**Questions For Authors**

- In our notation $S^k(\mathbb{X})\_{[0,T]} \in (\mathbb{R}^d)^{\otimes k}$ is the $k$-th level of the signature, i.e. $$S(\mathbb{X})\_{[0,T]} = (S^0(\mathbb{X})\_{[0,T]}, S^1(\mathbb{X})\_{[0,T]}, S^2(\mathbb{X})\_{[0,T]}, \ldots),$$ where the signature $S(\mathbb{X})\_{[0,T]}$ lives in the tensor algebra $T((\mathbb{R}^d)) = \oplus\_{k=0}^\infty (\mathbb{R}^d)^{\otimes k}$. We believe the reviewer was referring to the notation used in [1] where $S^k(\mathbb{X})\_{[0,T]}$ denotes the level-$k$ *truncated* signature, i.e. the mathematical object living in $T^k((\mathbb{R}^d))= \oplus\_{i=0}^k (\mathbb{R}^d)^{\otimes i}$, and hence $S(\mathbb{X})\_{[0,T]}$ is indeed retrieved by taking the limit as $k\rightarrow\infty$.
- This equation was left unnumbered due to formatting reasons. We agree with the reviewer that this is quite an important equation and thus we will number it.
- This corrector (aka control) was chosen based on a control variate argument: by adding an appropriately signed (and scaled) mean-zero control to the estimator we preserve the original estimator's bias while reducing its variance (the stronger the correlation between the control and the original estimator, the larger the reduction in variance).
The form of the signature of a martingale yields a natural candidate for the choice of such control variate: by substituting the outermost Stratonovich integral with an Ito integral we obtain a mean-zero term which, by construction, should be highly correlated with the original estimator, thus leading to a significant reduction in variance. This is the content of footnote 6 on page 6. As this provides the main motivation for choosing the form of the corrected estimator we shall move the discussion contained therein to the main text. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I still believe equations should be numbered even if they aren't cross referenced in the paper as future researchers may wish to reference them. I don't have any other concerns and I believe this is a good paper for ICML.
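The control-variate mechanism described in this rebuttal can be illustrated on a generic Monte Carlo toy problem (a standard textbook construction, not the paper's martingale-corrected estimator): subtracting a correlated mean-zero control from each sample preserves the mean while shrinking the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1_000, 500

def estimates(correct):
    """Monte Carlo estimates of E[exp(X)], X ~ N(0, 1), over many repetitions."""
    out = []
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = np.exp(x)              # target samples; true mean is exp(1/2)
        if correct:
            c = np.cov(y, x)[0, 1] # scaling for the control (Var[X] = 1)
            y = y - c * x          # subtract the correlated mean-zero control
        out.append(y.mean())
    return np.array(out)

plain, corrected = estimates(False), estimates(True)
assert corrected.var() < plain.var()  # same target mean, lower variance
```

The variance reduction factor is governed by the squared correlation between the target and the control, mirroring the footnote-6 argument about choosing a control highly correlated with the original estimator.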
Summary: An empirical estimate of the expected signature of a stochastic process depends on the number of observed paths $N$ and the partition $\pi$ on which the paths are observed. This paper shows that under suitable conditions, the empirical estimate of the expected signature of a canonical geometric stochastic process is consistent and asymptotically normal in the double asymptotic limit of $N\rightarrow\infty$ and $|\pi|\rightarrow 0$. Brownian motion, fractional Brownian motion (fBM), a bidimensional continuous-time autoregressive process, and the price-variance dynamics of a Heston model are shown to meet the conditions for the empirical estimator to be consistent and all but fBM are shown to meet the conditions for the empirical estimator to be asymptotically normal. Furthermore, this paper shows that when the stochastic process is a martingale, you can introduce a zero-mean correction term which reduces the variance of the empirical estimator. The paper concludes by highlighting that the theoretical results deepen our understanding of existing machine learning approaches which use the expected signature and by demonstrating that the martingale correction leads to improved empirical performance, even for processes which are not martingale. ## Update after rebuttal The authors addressed my minor stylistic comments and concerns regarding applying the method to non-martingale processes. I am happy to recommend acceptance after the rebuttal. Claims And Evidence: The main claims of this paper are theoretical, and I discuss these and the provided proofs in more detail in the relevant section. The evidence provided to demonstrate the empirical benefits of the martingale correction is clear and convincing. However, the paper does not provide a theoretical or intuitive justification for the improved empirical performance observed on non-martingale processes. 
Methods And Evaluation Criteria: A good demonstration of the benefits of the martingale correction is the comparison of pricing path-dependent derivatives. However, the other two classes of problem considered both contain non-martingale processes. This means that the martingale correction introduces a bias into the empirical estimator. Although the martingale correction is shown to be beneficial on these datasets, the broader implications of applying the correction to non-martingale paths and in which situations it is safe to do so are not discussed in the paper. Theoretical Claims: The majority of this paper is dedicated to proving four theoretical claims. 1) Under suitable continuity conditions on $\mathbb{X}$, $S^{\mathbf{I}}(\mathbb{X}^{\pi_n})$ converging in probability to $S^{\mathbf{I}}(\mathbb{X})$ as $n\rightarrow\infty$ and $|\pi_n|\rightarrow0$ implies that it converges in $L^m$. 2) That if the sequence of paths $\\{\mathbb{X}^n, n\geq 1\\}$ is stationary and ergodic and you have a set of refining partitions as in (1), then the empirical estimator is consistent. If further the sequence $\\{\mathbb{X}^n, n\geq 1\\}$ is strongly mixing, then the empirical estimator is asymptotically normal. 3) That under suitable conditions, when $\mathbb{X}$ is a Gaussian process the empirical estimator is consistent. 4) If $\mathbb{X}$ is a martingale, then there exists a mean-zero correction term which reduces the variance of the empirical estimator, and can itself be estimated. I have thoroughly reviewed the proof of claim (1) and found no errors. However, due to time constraints, I was only able to briefly review the proofs of claims (2), (3), and (4). No apparent issues were identified during this cursory examination. These claims are a significant theoretical contribution, providing an important advancement to the existing literature. 
However, given that the paper is highly theoretical and 55 pages long, I am concerned that a conference with an eight-page limit does not offer a suitable venue for adequately presenting or rigorously reviewing these theoretical results. Experimental Designs Or Analyses: The design of all experiments seems sound. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This paper directly extends the work of Ni et al. (2012) and Passeggeri et al. (2020), who studied the asymptotic behaviour of the expected signature empirical estimator for Brownian motion and fractional Brownian motion, respectively. Additionally, this paper gives a theoretical framework for understanding and extending existing machine learning applications of the expected signature, such as those in Lemercier (2021) and Triggiano et al. (2024). Essential References Not Discussed: I am not aware of any missing essential references. Other Strengths And Weaknesses: I have no other strengths or weaknesses I wish to highlight. Other Comments Or Suggestions: ## Minor Comments - On line 37, it is not clear to me what the informational content of the signature is and why this decays exponentially, when the terms of the signature decay factorially. - On line 39, the signature was not introduced in rough path analysis. It dates back to 'Iterated Integrals and Exponential Homomorphisms' by Chen (1954). - On line 88, you swap the order of $n,\pi$ and $\pi,n$. - The Theorems and their proofs in the Appendix are not well connected, with no references in the main body of the text to the location of the proofs. - The proofs can be difficult to follow. The presentation would benefit from short outlines of the proof strategy at the beginning of the relevant sections and subsequent headings to section the proof. For example, this could be used to highlight the areas of the proof of Theorem 2.8 which concern conditions i), ii), and iv). 
- On line 296, the abbreviation MCAR is not defined. - On line 827, a second "by" is accidentally included. Questions For Authors: I have three questions for the authors. 1) The structure of the proof of Theorem 2.8 under i), iii), and iv) suggests it may be possible to continue extending to $\alpha\in(1/(n+1),1/n]$ by assuming additional higher-order bounds on the path increments. Is there a theoretical barrier to progressing to $\alpha\leq1/4$, or just progressively tedious algebra? 2) How should the martingale correction being empirically beneficial on non-martingale processes be interpreted? Is this a complete surprise or something that has a theoretical (or intuitive) explanation? 3) A common setting in application is the ability to continue increasing the number of samples $N$ while being limited by the sampling rate $|\pi|$. Is it possible to comment on the asymptotic behaviour of the empirical estimator as just $N\rightarrow \infty$? Would you expect the estimator to be inconsistent in this limit? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the comments. We believe the reviewer has well understood the main contributions of the paper, which they summarized in "Theoretical Claims". **Minor comments** - Line 37: typo, the informational content (i.e. the norm) of signature terms decays factorially. - Line 39: we agree this was poorly worded, we shall update the references. - Line 88: we will fix notation to be consistent. - We will add references to the proof locations in the main text (e.g. Proof of Theorem 2.8. See Appendix A.1.) - We will add short outlines to the proofs (e.g. the main idea in the proof of Theorem 2.8 is to show the sequence of signatures along $\pi\_n$ is a Cauchy sequence in $L^m$). We shall also add a better partitioning between sections of this proof that concern conditions $(i)$, $(ii)$, and $(iv)$. - Line 296: we will spell out Multivariate Continuous-time Autoregressive Process (MCAR). - Line 827: typo. **Questions For Authors** 1. Yes, there is a theoretical barrier to progressing to $\alpha\leq 1/4$. This arises when applying Lemma A.1 to bound the sum of $Z^{\mathcal{I}}\_{[s,t]} $ with $i\_2=4$. More precisely, let $\alpha\leq 1/4$ and assume we follow the same proof strategy of Theorem 2.8 up to line 794 under assumption $(iv)$. Then, when $\alpha\leq 1/4$, we cannot apply (19) to bound the sum of the $Z^{\mathcal{I}}\_{[s,t]}$'s with $i\_2=4$ and so this term needs to be treated separately in the same way the $i\_2=2$ and the $i\_2=3$ had to be treated separately when $\alpha\leq 1/2$ and $\alpha\leq 1/3$ respectively. In a similar spirit to the $i\_2=2, 3$ cases we can thus attempt to apply Lemma A.1 to bound the sum of the $Z^{\mathcal{I}}\_{[s,t]}$'s. In this case though, even if we impose a condition similar to (A$\beta$) and (A$\gamma$) to bound the first term by $|\pi\_n|^{\zeta -1}$ for some $\zeta>1$, the second term, i.e. 
the sum of the $\\|Z^{\mathcal{I}}\_{[s,t]}\\|^2$, will be bounded by $|\pi\_n|^{8\alpha - 1/2}$, which does not necessarily vanish when $\alpha \leq 1/4$. 2. When the data-generating process is not a martingale, the corrected estimator introduces an additional bias which, depending on how "far" the process $\mathbb{X}$ is from being a martingale, might offset the benefits of the variance reduction. When $\mathbb{X}$ is "close" to being a martingale, though, the corrected estimator is still theoretically better (in terms of MSE) than the classic estimator. Moreover, in practical applications, the corrected estimator might turn out to be a more informative quantity than the classic estimator. Intuitively, one might expect this to be the case when the drift of the underlying process is not relevant for the downstream task. In cases where the underlying process cannot be assumed to be a martingale, we thus suggest treating the martingale correction as a data transformation applicable in the learning pipeline (a model hyper-parameter in a similar spirit to the add-time or the lead-lag transform in the signature context) whose usefulness may be empirically ascertained via cross-validation. This is discussed in Appendix E.1 (paragraph starting on line 2436). If the reviewer feels this is an important point to include in the main text, we would be happy to move the relevant discussion to Section 2.2. 3. Fixing the sampling rate leads to an irreducible lower bound on the precision with which we can learn the expected signature of the continuous-time process (in a similar spirit to the Nyquist–Shannon sampling theorem). If $\pi$ is fixed (e.g. 
because of constraints on the sampling rate) and we let $N\rightarrow \infty$ the empirical expected signature $\hat{\phi}\_{\mathbf{I}}^{\Pi(N)}(T)$ where $\Pi(N) = \pi \cup (T+ \pi) \cup \dots \cup((N-1)T+ \pi) $ is a consistent (and asymptotically normal) estimator for the expected signature $\phi^\pi\_{\mathbf{I}}(T) = \mathbb{E}[S^\mathbf{I}(\mathbb{X}^\pi)\_{[0,T]}]$ of the *piecewise linear* process $\mathbb{X}^\pi$ under the condition that $\\{\mathbb{X}^{n, \pi}, n\geq 1\\}$ – or sufficiently $\\{\mathbb{X}^{n}, n\geq 1\\}$ – is stationary and ergodic (and strongly mixing). This follows directly from the application of Birkhoff's ergodic theorem given in A.2.1 (and the dependent CLT in A.2.2) with $\mathbb{X}$ replaced by $\mathbb{X}^\pi$. The estimator $\hat{\phi}\_{\mathbf{I}}^{\Pi(N)}(T)$ is thus asymptotically (in the $N\rightarrow\infty$ limit) biased for the expected signature $\phi(T) = \mathbb{E}[S(\mathbb{X})\_{[0,T]}]$ of the underlying stochastic process $\mathbb{X}$, with bias $\phi^\pi(T) - \phi(T)$ (this is the "irreducible error" due to $\pi$-sampling). The in-fill asymptotic $|\pi|\downarrow 0$ is crucial to show such error vanishes, i.e. the empirical expected signature is a consistent estimator for the expected signature $\phi(T)$. --- Rebuttal Comment 1.1: Comment: 1) Thank you for your clear response. 2) Thank you for this clarification. I feel that this point is not suitably addressed in the main body of the text or Appendix E.1. In particular, any modification could be considered a model configuration hyperparameter, but the theory seems to suggest that this modification should only be beneficial when the process is a martingale. The empirical results would carry more weight if some aspects of your response to my question were included in the paper. 
3) I was actually interested in the case where $\\{\pi\_n\\}_{n \in \mathbb{N}}$ is a sequence of partitions of $[0,T]$, where each $|\pi\_i|\geq\delta>0$ and the cumulative join $\bigvee\_i \pi\_i$ becomes dense in $[0,T]$ as $n \to \infty$, in the sense that $|\bigvee\_i \pi\_i|\rightarrow 0$, as opposed to the case of having $\pi$ fixed. --- Reply to Comment 1.1.1: Comment: 2. In case of acceptance, we will add part of this response to Section 2.2, clarifying the trade-off between variance and bias when the process is not a martingale (and thus providing theoretical justification for using the martingale correction when the underlying process is not necessarily a martingale). 3. Thank you for clarifying the comment. If we constrain the sampling rate to be at most $\delta$, i.e. $\min\_{[u,v]\in\pi}|v-u|\geq\delta$, then we cannot obtain a sequence of partitions $\pi\_i$ such that $\bigvee\_i\pi\_i$ is dense in $[0,T]$ (the best we can do is to have $\bigvee\_i\pi\_i$ dense in $\\{0\\}\cup[\delta,T-\delta]\cup\\{T\\}$). If instead we constrain the mesh $|\pi|=\max\_{[u,v]\in\pi}|v-u|\geq\delta$, then each partition might have observations arbitrarily close to each other (which we believe is not the realistic scenario the reviewer is interested in). As a middle ground between these two cases, let us consider the following example, where we constrain the sampling rate of each partition to be at most $\delta$ except for the first and last observation points, so that we can obtain $\bigvee\_i \pi\_i$ dense in $[0,T]$. This example shows that an irreducible error similar to the one discussed in the initial response to this comment might still arise despite $\bigvee\_i\pi\_i$ being dense in $[0,T]$. *Example.* Consider a two-dimensional Brownian motion $\mathbb{W}=\\{\mathbf{W}\_t\in\mathbb{R}^2, t\in[0,T]\\}$. Assume for simplicity that the sampling rate $\delta$ divides $T$, i.e. $\delta K=T$. 
Let $\\{\pi\_n, n\geq 0\\}$ be a sequence of partitions such that $$\pi\_n=\\{0,\theta\_n,\theta\_n+\delta,\theta\_n+2\delta,\ldots,\theta\_n+(K-1)\delta,T\\},$$ for $\theta\_n\in(0,\delta]$ with $\\{\theta\_n, n\geq0\\}$ dense in $(0,\delta]$. For example, one may take $$\\{\theta\_n, n\geq 0\\}=\\{\delta,\delta/2,\delta/3,2\delta/3,\delta/4,3\delta/4,\ldots\\},$$ ensuring that $\bigvee\_n\pi\_n$ is dense in $[0,T]$. Let us consider the task of estimating $\phi\_I(T)=\mathbb{E}[S^I(\mathbb{W})\_{[0,T]}]$ where $I=(1,1,2,2)$. We can show that $$ \begin{align*} \mathbb{E}[S^{I}(\mathbb{W}^{\pi\_n})\_{[0,T]}]&=\frac{T^2}{8}-\frac{T\delta}{12}+\frac{\theta\_n(\delta-\theta\_n)}{6}\\\\ &\leq\frac{T^2}{8}-\frac{T\delta}{12}+\frac{\delta^2}{24}\\\\ &<\frac{T^2}{8}=\mathbb{E}[S^{I}(\mathbb{W})\_{[0,T]}]=:\phi\_I(T). \end{align*} $$ For the first equality see (\*) below, while for the last equality see [4], Theorem 3.1. Setting $$ \hat{\phi}^N\_I(T):=\frac{1}{N}\sum\_{n=1}^N S^{I}(\mathbb{W}^{\pi\_n})\_{[0,T]}, $$ for a set of independent observations $\\{\mathbb{W}^{\pi\_n}, n\geq1\\}$, we can hence deduce that for all $N\geq1$, $$ \mathbb{E}[\hat{\phi}^N\_I(T)]\leq\frac{T^2}{8}-\frac{T\delta}{12}+\frac{\delta^2}{24}<\phi\_I(T). $$ By applying the shuffle property of the signature and explicitly computing the corresponding terms, we can show that $\mathbb{E}[S^{I}(\mathbb{W}^{\pi\_n})^2\_{[0,T]}]$ is uniformly bounded and hence $\\{\hat{\phi}^N\_I(T), N\geq1\\}$ is uniformly integrable. Under uniform integrability, convergence in distribution would imply convergence of means, but since $$ \mathbb{E}[\hat{\phi}^N\_I(T)]\not\rightarrow\phi\_I(T),\quad N\rightarrow\infty, $$ we can deduce $$ \hat{\phi}^N\_I(T)\not\rightarrow\phi\_I(T),\quad N\rightarrow\infty, $$ in any sense (not even in distribution). 
___ (\*) We first apply Chen's relation and the fact that Brownian motion has independent and stationary increments to deduce that $$ \begin{align*} \mathbb{E}[S(\mathbb{W}^{\pi\_n})\_{[0,T]}]&=\mathbb{E}[S(\mathbb{W}^{\pi\_n})\_{[0,\theta\_n]}\otimes S(\mathbb{W}^{\pi\_n})\_{[\theta\_n,T-\delta+\theta\_n]}\otimes S(\mathbb{W}^{\pi\_n})\_{[T-\delta+\theta\_n,T]}]\\\\ &=\mathbb{E}[\exp\_{\otimes}(\mathbf{W}\_{0,\theta\_n})]\otimes\mathbb{E}[S(\mathbb{W}^{\pi})\_{[0,T-\delta]}]\otimes\mathbb{E}[\exp\_{\otimes}(\mathbf{W}\_{0,\delta-\theta\_n})], \end{align*} $$ where $\pi$ is the partition corresponding to $\theta=\delta$. To obtain the $I=(1,1,2,2)$ term, we expand the tensor product and plug in the analytic form of the expected signature of piecewise-linear Brownian motion given in Theorem 3.2 of [4]. [4] Ni, H. (2012). The expected signature of a stochastic process. PhD thesis, University of Oxford. https://ora.ox.ac.uk/objects/uuid:e0b9e045-4c09-4cb7-ace9-46c4984f16f6
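As a numerical companion to this example, the identity $\mathbb{E}[S^{(1,1,2,2)}(\mathbb{W})\_{[0,T]}]=T^2/8$ can be checked by direct simulation. The sketch below is our own illustration (not part of the submission): it builds the level-4 truncated signature of piecewise-linear 2D Brownian motion segment by segment via Chen's relation (each linear segment contributes the truncated tensor exponential of its increment) and averages the $(1,1,2,2)$ coordinate over sample paths; the estimate lands slightly below $T^2/8=0.125$, consistent with the $O(\delta)$ discretization bias computed above.

```python
import numpy as np

# Monte Carlo sanity check (our own sketch) of E[S^{(1,1,2,2)}] = T^2/8 for
# 2D Brownian motion, via the level-4 truncated signature of its
# piecewise-linear interpolation on a uniform partition.
rng = np.random.default_rng(0)
N, K, T = 8000, 50, 1.0                      # paths, partition intervals, horizon
dW = rng.normal(0.0, np.sqrt(T / K), size=(K, N, 2))

# Truncated signature levels 1..4 (level 0 is identically 1 and kept implicit).
L1 = np.zeros((N, 2))
L2 = np.zeros((N, 2, 2))
L3 = np.zeros((N, 2, 2, 2))
L4 = np.zeros((N, 2, 2, 2, 2))
for d in dW:
    # Signature of a linear segment: tensor exponential of its increment d.
    E1 = d
    E2 = np.einsum('ni,nj->nij', d, d) / 2
    E3 = np.einsum('nij,nk->nijk', E2, d) / 3
    E4 = np.einsum('nijk,nl->nijkl', E3, d) / 4
    # Chen's relation: multiply the running signature by the segment signature.
    L4 = (L4 + np.einsum('nijk,nl->nijkl', L3, E1)
             + np.einsum('nij,nkl->nijkl', L2, E2)
             + np.einsum('ni,njkl->nijkl', L1, E3) + E4)
    L3 = (L3 + np.einsum('nij,nk->nijk', L2, E1)
             + np.einsum('ni,njk->nijk', L1, E2) + E3)
    L2 = L2 + np.einsum('ni,nj->nij', L1, E1) + E2
    L1 = L1 + E1

est = L4[:, 0, 0, 1, 1].mean()               # word (1,1,2,2) <-> indices (0,0,1,1)
print(est)                                   # near T^2/8 = 0.125, minus O(|pi|) bias
```

Refining the partition (larger `K`) moves the estimate toward $0.125$, matching the in-fill asymptotics discussed above.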
Reinforcement Learning Control of a Physical Robot Device for Assisted Human Walking without a Simulator
Accept (poster)
Summary: This paper develops an RL application for controlling a soft wearable exosuit for normative human walking. Grounded in the motivation that this type of system lacks robust simulators or dynamic models, the paper approaches it from a model-free RL perspective. Furthermore, given the natural scarcity of data for this problem, the paper leverages the traditional approach of initializing a policy via Imitation Learning from pre-collected trajectories and then applies RL fine-tuning to optimize a cost that encourages safe normative walking and reduced EMG effort. Based on experiments conducted with five healthy participants, the paper claims that the proposed system improves personalized walking assistance. Claims And Evidence: - The RIIV “method” improves data quality for learning the exosuit walking controller. Evidence: The work presents as evidence the prediction errors of the controllers learned with the proposed method and two other baselines (the “Direct” method and directly using sensor data). From the bar plots, it is clear that, for some of the control variables, the RIIV method presents better results. Nonetheless, there are some points of concern: first, the presented final training cost (Fig. 2, left) is roughly the same for both cases, which makes it unclear why RIIV presents better prediction errors in most variables. Furthermore, none of the results present confidence intervals, so it is unclear if they are statistically significant here. - Online learning effectively adapted the initial offline policy and provided personalized control for human-robot normative walking. Evidence: The presented evidence shows the effect of online RL on top of the offline-learned policy, presenting gains in terms of reduced EMG effort and peak knee error across the five participants. While this supports the hypothesis of offline-to-online adaptation, it is still unclear if this controller really brings benefits to humans. 
To clarify that, I would expect a baseline with no control signal (like a “placebo” controller) and an evaluation of the performance measures for the participants. The goal is to understand the impact of the whole exosuit on the participants. Another important point: humans also adapt themselves to improve those performance measures, so it is unclear if the reported gains are due to the adaptation of the RL controller OR if they are due to the humans adapting their locomotion. To rule out the latter hypothesis, I would expect a baseline that runs the offline policy for as long as the online phase takes, but without any RL adaptation, and then evaluates the performance measures. - Human and robot co-adapted to achieve normative walking with reduced EMG effort. Evidence: The paper provides additional plots presenting the timing and duration of the state/control variables and describes how this evidence supports this co-adaptation hypothesis. Honestly, the text was really unclear and hard to follow, so I believe the paper failed to communicate the findings for this claim. Furthermore, the plots in Figure 4 are hard to read. Lastly, the suggested baseline from the previous point is also important here: it is unclear if the reduced EMG effort is due to the adaptation of the human, the RL controller, or both. - Both cost terms in the objective are required to achieve stable human-robot normative walking. Evidence: The paper conducts experiments ablating each term of the objective function to support this claim. While I believe this is the right evidence, the paper again struggles to communicate the findings clearly: it presents two separate figures, each representing the ablation of one component. But the figures are separated, and the plot scales are different, which makes it really hard to analyze them. Therefore, it is hard to evaluate such a claim. 
I would suggest unifying Figures 5 and 6 and, for each plot, showing three different curves: one with the full objective, and two others, each ablating one component of the objective. Methods And Evaluation Criteria: Yes. Since this is an application-driven paper, the evaluation happens directly in the final target application, and the evaluation criteria follow directly from the performance criteria expected of a walking-assistance device. Theoretical Claims: The work is mostly driven by the application, so it leans toward the empirical side. The work does bring some theoretical analysis, but to the best of my knowledge this is largely inherited from previous work that analyzes the RL algorithm's properties, and it is not exactly a direct contribution of this work. Experimental Designs Or Analyses: The general experimental design makes sense, although I believe the two baselines mentioned before are key to validating the impact of the proposed system. Besides that, it is not clear how challenging the presented setup is. The experiments are conducted on five healthy participants, in a stable walk at 1 m/s. The problem presents a state space of 5 dimensions and an action space of 4 dimensions. Thus: 1) It is unclear if the locomotion gaits of these five participants would indeed require a meaningful adaptation (in other words, how different should we expect their locomotion patterns to be?); 2) The state/action spaces are comparable to small benchmarks in RL; while I understand the major challenge comes from the lack of a dynamics model and limited data, the presence of expert demonstrations also considerably mitigates that and provides a good base policy for downstream RL. Supplementary Material: Yes. I analyzed the appendices. They require a bit more organization: Tables 5, 6, 7 are not centered, while their captions are; Appendix I does not contain any discussion, so its purpose is unclear. 
Relation To Broader Scientific Literature: I understand the main contribution of the paper as providing a practical RL system for exosuit control under a limited setup (stable 1 m/s walking for healthy humans), which seems to be the first of its kind. The work does bring a very informative related-work discussion of the area, which helps explain why this is different from exoskeleton control. However, I am not an expert in this area, and I could not really evaluate whether there are other potential methods for exosuit control that should be contextualized with respect to this work. Essential References Not Discussed: Please see my previous comment. Other Strengths And Weaknesses: Strengths: I believe the overall problem setting (exosuit control) is really interesting, and the proposed methodology fits the problem very well given the lack of a simulator/dynamics model. Concerns: Besides the concerns previously highlighted, I struggled to identify the practical impact of the proposed controller. In line with what was described in the Impact Statement, the method (as it is developed and evaluated) is limited to a very restricted locomotion pattern and focuses particularly on healthy participants. While I understand the work provides a proof of concept, I believe it is still very far from a real-world controller, as that would require more extensive evaluation on different locomotion patterns, comprising different speeds, terrains, etc. As an application paper, it would be interesting to describe which applications the developed controller is intended for, given the content presented in this paper. Other Comments Or Suggestions: Typos: The work consistently misuses opening quotation marks. Questions For Authors: - How are the safety constraints and tolerance parameters selected (Tables 3, 4, and 5)? - In Appendix G, are the described adjustments to address the OOD issue learned by the controller, or is there manual human intervention? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Fig.2 concern 1) DIRECT & RIIV are different methods resulting in different physical scales. Therefore, Fig2-left (the learning curve) should only be used to monitor learning convergence, but not for quantitative comparison of performance. For example, consider the only non-zero error in Eqn. (6) to be the swing timing $t_c = 0.1$; then, in physical time, the RIIV error = 0.0198s and the DIRECT error = 0.0946s. Please note that the other 4 plots to the right provide more details on comparisons of the different methods. 2) We have included a revised Fig 4 (now Fig 11). Confidence intervals are presented for Fig 4-left. Note that the other plots in Fig. 4 are shown in log scale, which does not reflect the physical values of the std. We therefore have included a new Table 8 to report the actual STDs for each variable. >Baseline and isolating co-adaptation 1) We kindly ask the reviewer to refer to Fig 3 and its caption, where each participant has a dashed line indicating the baseline EMG effort during natural human walking without exosuit assistance. 2) For details on isolating the effects of the robot and of robot adaptation, please refer to Reviewer tqYD Q1. >Fig 4 concern We have a new Fig 12. Please review it carefully, especially the caption on co-adaptation. >Fig 5, 6 concern As the reviewer suggested, we include a new Fig 13. Since the terms in the overall cost differ physically, so do their scales. The results reported here are based on actual values without any pre-/post-processing. The purpose of Fig 13 is to show how learning with only a partial objective (EMG or kinematic error alone) fails to guide effective adaptation toward normative walking with reduced EMG effort. This is what the ablation study is designed for. 
> Challenge of the study and gait variation Our problem presents unique and significant challenges that distinguish it from standard simulated tasks: 1. Unlike benchmark environments based on rigid-body systems with reliable simulators, our system involves a soft exosuit interacting with a human, introducing complex and time-delayed dynamics. No existing simulator can accurately model this interaction, making our simulator-free, real-world learning approach both necessary and significantly more challenging. 2. Benchmark tasks offer well-defined state/action spaces and objective functions. In contrast, our study tackles a first-of-its-kind real-time control problem, where defining meaningful states, actions, and cost functions requires substantial innovation—making problem formulation itself a core contribution of our work. 3. Regarding gait variation, please refer to our detailed response to reviewer Q99a Q3. 4. We appreciate the reviewer's recognition that expert demonstrations help address the data challenge and note that, to our knowledge, no prior work has applied RIIV or this offline-to-online pipeline in soft wearable robotics—making it a novel contribution of our study. >Complex tasks and real-world application Please refer to our detailed response to reviewer prNk Q3 >Appendix I, centering issue, and typos The figures in Appendix I are discussed in the main paper: Figs 5 and 6 in the ablation study, and Fig 7 in the co-adaptation analysis. Typos will be corrected, thanks. >Safety constraints selection 1. Table 5 presents biomechanics data that define the realistic range of normative walking patterns [Zhang et al., 2020b]. 2. Based on this reference, the control safety constraints in Table 3 were established. In particular, we account for the actuator's deflation delay: too long a duration may raise safety concerns, so we cap it at 20% of the gait phase. The target states used in our controller correspond to the most commonly observed values within normative walking. 3. 
Table 4 outlines constraints in the RL framework to ensure participants maintain normative walking and are not exposed to unsafe or misleading actuator behavior. >Human, manual intervention for OOD There was no human intervention or cherry-picking throughout the study—all results reflect true experimental data from all five participants. The adaptations were learned automatically by the RL controller during online training. In the offline phase, the exosuit was inactive, and data were collected from unassisted walking, meaning no actuator-induced effects were present. As shown in Figure 9A–B, activating the actuators led all participants to adjust their gait, introducing a distribution shift and a potential OOD issue. Through online learning, the RL policy adapted to this shift as part of a natural human-exosuit co-adaptation process, without manual tweaking. As demonstrated in Fig 12, this adaptation was crucial for handling real actuator behavior and achieving normative gait patterns with reduced muscular effort. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. In light of the new evidence, I decided to increase my score from 2 to 3. --- Reply to Comment 1.1.1: Comment: Thank you! We appreciate your evaluation and the opportunity to improve our work based on your feedback.
Summary: This paper presents an innovative approach to controlling soft exosuits for assisted human walking using reinforcement learning (RL) without relying on a simulator. The authors propose an online Adaptation from an offline Imitating Expert Policy (AIP) approach that addresses key challenges in RL-based control of physical devices, including limited data, absence of simulators for human-robot interaction, computational constraints, and the need for personalized control while ensuring safety. The method first uses offline learning to mimic human expert actions through real walking demonstrations without robot assistance, then initializes online actor-critic learning to optimize personalized robot assistance. The authors demonstrate their framework on five human participants, showing consistent performance improvements in terms of reduced human effort while maintaining normative walking patterns. The paper also provides qualitative performance guarantees for their online RL method, including learning convergence, dynamic stability, and solution optimality. Claims And Evidence: The claims made in the submission are well-supported by evidence. The authors claim that their AIP approach can effectively control a soft exosuit to assist human walking without a simulator, which is demonstrated through experiments with five human participants. The claim that online adaptation can personalize the control for individual users is supported by the consistent reduction in EMG effort (muscle activity) across all participants after online training, despite initial variations. The paper also claims that their data-centric approach is more effective than algorithm-centric approaches for this application, which is supported by comparing their RIIV (reducing intra-person and inter-person variation) method with a benchmark direct normalization approach, showing improved action divergence and better alignment with ground truth data. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors use a combination of offline imitation learning and online reinforcement learning, specifically direct heuristic dynamic programming (dHDP), which is well-suited for the constraints of real-time control with physical devices. The evaluation criteria include stage cost, peak knee error, and EMG effort, which effectively capture both the quality of the walking assistance (normative walking) and the reduction in human effort. The authors also consider important practical aspects such as time to convergence of RL online learning and action divergence to measure offline policy optimality. The experimental design involving five human participants provides a reasonable sample size for demonstrating the effectiveness and generalizability of the approach. Theoretical Claims: The paper makes several theoretical claims about the properties of their learning process and control performance, including learning convergence, solution optimality, and control system stability. While the main paper provides a high-level overview of these claims, the authors reference detailed proofs in the appendices. The theoretical framework appears sound, building on established principles from reinforcement learning and control theory. The authors acknowledge the challenges of controlling soft inflatable actuators due to their nonlinear nature, material properties, and manufacturing variations, and provide a theoretical analysis that accounts for these factors. Experimental Designs Or Analyses: The experimental design is robust and well-executed. The authors test their approach on five human participants walking on a treadmill at a constant speed, with clear safety constraints to ensure participant comfort and safety. 
The experiments include both offline data collection using a Motion Capture system for ground truth and online learning using IMU sensors for real-time control. The analyses include quantitative comparisons of performance metrics before and after online learning, as well as ablation studies to understand the contribution of different components of the cost function. The results are presented clearly with appropriate statistical measures, and the authors do not selectively present results, which enhances the credibility of their findings. Supplementary Material: Yes, the paper references extensive supplementary material, including appendices with details on participant anthropometric data, IRB approval, the physical setup, safety constraints, experimental protocols, and theoretical analyses. Relation To Broader Scientific Literature: The paper effectively situates its contributions within the broader scientific literature on soft exosuit control, reinforcement learning for robot control, and offline-to-online RL approaches. The authors acknowledge prior work on human-in-the-loop optimization methods for exosuit control and highlight the limitations of these approaches in terms of requiring robust dynamical models and lacking online tuning capabilities. They also discuss recent advances in offline-to-online RL and note that most existing approaches rely on extensive simulations or vast amounts of offline data, which are not available in their application. The paper's data-centric approach represents a novel contribution to the field, addressing a gap in the literature on RL control of physical devices without simulators. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper addresses a significant practical challenge in controlling soft exosuits for human assistance, with potential real-world impact in rehabilitation and assistive technologies. 
- The approach is pragmatic, focusing on data quality rather than algorithm complexity, which is particularly valuable for applications with limited data and computational resources. - The experimental validation on five human participants demonstrates the robustness and generalizability of the approach. - The paper provides both theoretical guarantees and empirical evidence for the effectiveness of the proposed method. Weaknesses: - The paper could benefit from a more detailed comparison with state-of-the-art methods in exosuit control, beyond the ablation studies presented. - The long-term effects of the proposed control approach on human adaptation and learning are not explored, which would be valuable for understanding the practical implications of the technology. - The paper focuses primarily on level ground walking, and it's unclear how well the approach would generalize to more complex locomotion tasks such as stair climbing or uneven terrain navigation. - The sample size of five participants, while reasonable for a proof-of-concept, may not be sufficient to capture the full range of human variability in gait patterns and responses to exosuit assistance. Other Comments Or Suggestions: Other Comments Or Suggestions - The paper would benefit from a more detailed discussion of the limitations of the approach and potential directions for future work. - A more comprehensive analysis of the computational requirements and real-time performance of the proposed method would be valuable for understanding its practical applicability. - The authors could consider exploring the potential of transfer learning between participants to further reduce the online learning time for new users. - The paper mentions that the approach could be applied to other wearable robotics applications, but does not elaborate on how the method might be adapted for different types of devices or assistance tasks. 
Questions For Authors: - How does the computational complexity of your online learning approach compare to other RL methods, and what are the implications for real-time control on embedded systems with limited resources? - Your experiments focus on level ground walking at a constant speed. How do you envision extending your approach to more complex locomotion tasks or varying walking speeds, and what additional challenges might arise? - The paper demonstrates co-adaptation between the human and the robot during online learning. Have you observed any longer-term adaptation effects in human participants after extended use of the exosuit with your control approach, and how might these effects influence the design of adaptive controllers for wearable robots? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Compare with sota method We thank the reviewer for the suggestion: 1. We have added Table 9, which summarizes related work that uses EMG as a performance metric. Our RL-based method achieves an average EMG reduction of 20%, significantly outperforming rule-based methods such as the one reported by Sridar, which achieved only a 7.37% reduction. 2. Model-based methods and Bayesian optimization require probabilistic models or approximations of the objective, which are not suitable for our problem. 3. Please refer to Reviewer Q99a Q1 for further detailed explanations. >Q2 Long Term effects As the application is the first study of its kind, a long-term study falls outside the scope of this work. However: 1. A potential challenge for long-term deployment is actuator timing drift due to wear and tear. In our study, all five participants used the same exosuit hardware across sessions, so any natural degradation, such as material fatigue or delay shifts, was implicitly captured during learning. If such effects occurred, AIP would have adapted to them as part of the online training process. 2. As shown in the newly added Fig. 9C, the AIP policy consistently maintains reduced EMG effort across all evaluation sessions with minimal variation. 3. Please refer to Reviewer Q99a Q2 for related comments. >Q3 Generalize and challenges of more complex tasks 1. As the first study of its kind, our work focuses on level-ground walking at a constant speed, a standard and widely adopted practice in the wearable robotics community. As shown in Tables 9 and 10, related studies have also primarily evaluated their methods under similar conditions. 
This controlled and repeatable environment serves as a foundational step for developing and validating the proposed AIP method before extending it to more complex locomotion tasks. 2. Regarding generalization of AIP, we introduced new results involving treadmill walking on a 7-degree incline, where the AIP method continues to perform robustly. Please refer to Reviewer tqYD, Q4 for related comments. 3. Real-world locomotion includes diverse tasks such as stair climbing and uneven terrain. While the core AIP principle remains applicable, these scenarios introduce challenges such as greater variability in human motion, the need for terrain-aware sensing, and potentially longer training times due to increased dynamic complexity. >Q4 Apply to other applications 1. The formulation of the state, action, and cost needs to be adapted for different tasks. 2. RIIV captures the invariant features of the application and can potentially improve sample efficiency. In other human-robot interactive systems, the principles of RIIV should still apply to identifying invariant task-related characteristics. 3. To extract the offline policy, we used a motion capture system commonly adopted in human-robot locomotion studies. >Q5 Gait pattern variability Please refer to Reviewer Q99a Q3 for related comments. >Q6 Limitation and Transfer learning 1. Our method currently controls a single leg. Extending to bilateral control introduces challenges in coordinating and synchronizing assistance across limbs. Multi-agent RL offers a promising direction to address this open question. 2. Data efficiency is a known limitation of RL. We agree that transfer learning is a promising direction, and we are actively exploring this approach, as it can accelerate convergence and reduce user-specific data needs. >Q7 Computational Complexity 1. The computational complexity of our online learning method is similar to that of DDPG, as both use the vanilla deterministic policy gradient framework. 
Our actor and critic networks have two fully connected layers with 256 hidden neurons, and a small batch size of 5 is used during online training. Based on the complexity estimate $O(\text{batch} \times \text{layers} \times \text{neurons}^2)$, the method remains lightweight and efficient. 2. RIIV is a data-centric method with low computational complexity of $O(\text{state})$, designed to minimize unnecessary input variance. This alignment improves data efficiency (Fig 2) and supports stable learning with compact networks and small batch sizes, without compromising performance. 3. For real-time deployment, we use a Raspberry Pi 5 single-board computer, with power consumption ranging from 2.7 to 12 watts. It has consistently supported both control and inference in real time across all five participants, with no observed performance issues.
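The network sizes quoted in the rebuttal can be made concrete with a minimal sketch. This is not the authors' implementation: the state/action dimensions, tanh activations, and random initialization are assumptions for illustration; only the layer count (two fully connected layers of 256 neurons) and the batch size of 5 are taken from the rebuttal.

```python
import numpy as np

def mlp_params(sizes, rng):
    """Weight/bias pairs for a fully connected network with the given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass: tanh on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

rng = np.random.default_rng(0)
state_dim, action_dim = 4, 1  # hypothetical dimensions, not from the paper

# Actor maps state -> action; critic maps (state, action) -> scalar value.
actor = mlp_params([state_dim, 256, 256, action_dim], rng)
critic = mlp_params([state_dim + action_dim, 256, 256, 1], rng)

batch = rng.standard_normal((5, state_dim))  # batch size 5, as in the rebuttal
actions = forward(actor, batch)
values = forward(critic, np.concatenate([batch, actions], axis=1))

actor_params = sum(W.size + b.size for W, b in actor)
print(actions.shape, values.shape, actor_params)  # (5, 1) (5, 1) 67329
```

Even at this width the actor has only about 67k parameters, which is consistent with the rebuttal's claim that per-step inference is lightweight enough for low-power embedded hardware.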
Summary: This paper presents an RL-based control framework for a soft exosuit that assists human walking without a simulator. The proposed Adaptation from an offline Imitating Expert Policy (AIP) approach learns from human walking demonstrations and refines control using dHDP. AIP prioritizes data quality over large-scale simulations, enabling real-time personalized assistance. Tested on five participants, the method reduces muscle effort while maintaining normative walking patterns, demonstrating learning convergence, stability, and optimality. ## update after rebuttal Thank you for the thoughtful rebuttal. I appreciate the effort to address my concerns. I will maintain my original score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Section F. dHDP solution and properties in the supplementary material. Experimental Designs Or Analyses: Yes. The experimental designs are reasonable, however, it would be stronger if direct comparisons to alternative methods were provided, such as human-in-the-loop optimization, traditional rule-based control strategies, or a simpler heuristic baseline. Also, the experiment setup is simple now, it would be beneficial to explore how AIP adapts over extended use. Supplementary Material: Yes. I skimmed the supplementary material. Relation To Broader Scientific Literature: This paper builds on prior work in wearable robotics, RL, and human-in-the-loop optimization, addressing the unique control challenges of soft exosuits. Prior RL-based exosuit control relied on sim-to-real transfer, whereas this paper eliminates the need for a simulator by combining offline imitation learning with online adaptation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. Simulator-free RL for exosuit control and offline-to-online. The proposed AIP framework eliminates the need for a simulator, addressing a major data limitation in prior RL-based exosuit control methods. 
The combination of offline imitation learning and online RL adaptation enables personalized exosuit assistance. 2. Data-centric approach. This paper introduces Reducing Intra- and Inter-Person Variation (RIIV) to improve data quality for imitation learning. 3. Convincing results. The experimental results demonstrate reduced muscle effort for human participants and fast adaptation. Weaknesses: 1. Unclear long-term adaptation. The paper does not explore whether AIP remains stable over prolonged use or how well it generalizes to real-world walking conditions beyond a treadmill setup. 2. Limited participant diversity. The study only includes five participants, primarily young, fit females and heavier males, which limits the generalizability of the findings. Older individuals, lighter males, and a broader range of body compositions could exhibit different gait patterns and adaptation responses, affecting the model’s robustness. Other Comments Or Suggestions: N/A Questions For Authors: 1. What are the computational requirements for the real-time deployment? Could the method work on low-power edge devices? 2. How stable is the method over multiple days of use? Does the learned policy continue improving over time, or does it plateau? 3. What are the biggest challenges in scaling your method to a larger, more diverse population? 4. Did you collect any subjective feedback from participants regarding their comfort, ease of movement, or perceived assistance from the exosuit? If so, what were their suggestions for improvement? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Compare with other methods We thank the reviewer for the suggestion. In response, we have added a comparison Table 9 summarizing related work that uses EMG as a performance metric: 1. There is no simple, intuitive heuristic control and, as shown in Table 9, rule-based methods consistently underperform compared to automatic control approaches. This is because they rely on expert-defined heuristics and lack adaptability to individual variations. The only known method for knee assistance, proposed by Sridar et al., requires manual tuning and achieves only a modest 7.37% EMG reduction. In contrast, our AIP method achieves an average 20% reduction in EMG, requires significantly less time, and learns automatically through reinforcement learning. 2. Some related approaches, such as Bayesian optimization (BO), rely on probabilistic models or approximations of the objective function. These introduce early approximation errors and struggle to extrapolate effective policies from the limited state-action data available in our study. Furthermore, model-based methods are generally impractical for soft exosuits due to the difficulty of accurately modeling their highly compliant and nonlinear dynamics. 3. This further motivates our use of RL, as it enables us to collect human data for offline learning and extract an initial policy that can be used to effectively initialize online adaptation. >Q2 Extended Use Issue 1. As this is the first application of its kind, long-term testing across multiple days or in uncontrolled, real-world environments falls outside the scope of this study, consistent with prior work in the field (see Tables 9 and 10). 2. However, our experimental design offers insight into stability over extended use. 
Each participant completed six cycles of 10 minutes walking followed by 5 minutes rest, totaling 7.5 hours for all participants. As shown in the newly added Fig. 9C, the AIP policy consistently reduced EMG effort with minimal variation, suggesting stable performance post-training. 3. We expect AIP to maintain performance once converged. Potential timing drift due to actuator wear is inherently accounted for, as all five participants used the same hardware. Any degradation would have been captured during training. Moreover, our pneumatic actuators are easily replaceable, supporting routine maintenance and long-term deployment. 4. Generalization to real-world conditions aligns more with industrial-scale evaluation. While outside the current scope, we believe AIP’s model-free, data-driven approach can generalize effectively, provided additional training data is collected in those settings. >Q3 Participant and Gait Diversity 1. Our participant pool included both males (3) and females (2), with a weight distribution of 69 ± 11.6 kg and height of 1.63 ± 0.09 m, demonstrating equal or greater variability in key physical characteristics compared to related studies in Table 10. 2. As shown in the newly added Fig 10, we present a histogram illustrating the range of gait patterns observed in our study. Based on established tolerance bounds from the prior biomechanics literature, our participants’ gait patterns span a broad spectrum within normative walking ranges. This indicates that our study captures a wide range of commonly observed gait variations. 3. Additionally, since this is the first study of its kind, with no prior precedent for the proposed method, IRB approval was essential to ensure participant safety. Including individuals with extremely diverse body types could introduce safety concerns, such as improper fit or reduced actuator effectiveness. 4. 
To further examine gait diversity and robustness, we introduced new results involving treadmill walking on a 7-degree incline, where the AIP method continues to perform robustly. For details, please refer to our response to Reviewer tqYD, Q4. >Q4 Computational Requirements Yes, our method runs effectively on low-power edge devices. For real-time deployment, we use a Raspberry Pi 5 single-board computer, with power consumption ranging from 2.7 to 12 watts. For details, please refer to our response to Reviewer prNK, Q7. >Q5 Subject Feedback 1. During training, we asked participants every five steps, “Do you understand the soft suit’s behavior?” 2. Participants’ awareness improved gradually. Within the first 20 steps, they commonly responded, “I don’t understand.” By 50 steps, they reported, “I kind of know,” and by 100 steps, most expressed, “I am fully aware.” 3. There are two directions to improve this process. First, enhancing the sample efficiency could reduce the initial "confusion" phase. Second, adding real-time visual feedback on when and how assistance is being applied could help participants understand the suit’s behavior more quickly.
Summary: This paper presents AIP (Adaptation from an offline Imitating expert Policy) for controlling a soft inflatable exosuit to assist human walking without relying on a simulator. The approach first learns from human walking demonstrations (offline phase), then adapts this policy online to personalize assistance. The method is validated on five participants, demonstrating reduced muscle effort while maintaining normative walking patterns. ## update after rebuttal I acknowledge the authors' rebuttal and maintain my original assessment. Claims And Evidence: *Key Claim:* AIP enables effective control of soft exosuits without a simulator *Evidence:* The authors demonstrate successful implementation on five participants with consistent performance improvement. Their approach successfully adapts to individual differences without requiring a complex simulator - which would be difficult to develop for this application due to the unpredictable human-soft robot interaction dynamics. Methods And Evaluation Criteria: The offline-to-online approach with RIIV data processing addresses the key challenge of working without simulation in human-robot learning. Performance metrics comprehensively capture both technical performance (stage cost, convergence time) and human factors (EMG reduction, knee error). The participant pool of five individuals provides reasonable diversity for initial validation. This work would be a good reference for practical reinforcement learning for physical human-robot interaction, especially in scenarios where accurate simulation and large expert data collection is infeasible. Theoretical Claims: NA Experimental Designs Or Analyses: *Offline-to-online transition methodology:* The authors' approach of using a single participant's data for offline learning and then testing generalization to other participants is valid for demonstrating adaptability. 
The experimental protocol included appropriate warm-up periods and rest periods between sessions to minimize fatigue effects. *Baseline comparisons:* The comparison against previously reported EMG reductions (7.37%) provides context for their 20% reduction. However, a within-study baseline comparison showing human adaptation alone would have strengthened the experimental design by better isolating the benefits of the co-adaptation process. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths** - Addresses the critical challenge of developing controllers for soft exosuits without simulators, which are nearly impossible to create for these complex human-robot systems. - Demonstrates successful direct learning in a physical environment with human subjects, a significant achievement in reinforcement learning. - Shows robust personalization across participants with measurable benefits (20% EMG reduction), outperforming previous methods (~7.37% reduction reported in comparable studies). - The participant diversity (varying height, weight, gender) is appropriate and exceeds similar studies in the field. - Includes supplementary videos demonstrating walking with the exosuit, strengthening the empirical validation. **Weaknesses** - The method relies on established RL algorithms (BC and dHDP) rather than introducing algorithmic innovations, though the paper's true contribution is in the practical implementation rather than novel algorithms. - The ablation study could provide deeper insight into the specific conditions under which simulator-free learning becomes advantageous versus when simulations might still be beneficial. - While the authors address population diversity adequately for this study, a more explicit roadmap for extending to more complex scenarios (variable walking speeds, different terrains) would strengthen future applications. 
Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # We thank the reviewer for their thoughtful feedback; please check our [new results](https://www.dropbox.com/scl/fo/rgonc4oohtzgf87jqlq3y/AKOxgA5jW9PHt3NKRFLAPcw?rlkey=04igqadzdmyojb9y48zlds5gf&st=g4pizvbd&dl=0) >Q1 Isolating the benefit of co-adaptation To address the issue of isolating the benefits of co-adaptation, please refer to Figs. 4 and 7, and the newly added results (Fig 9) from the same previous experiment data. 1. Human-robot co-adaptation is coupled, as the human brain's sensorimotor system governs behavior during robot-assisted walking. Isolating human adaptation is not feasible for the current study. But human adaptation, reflected by human walking behavior, naturally emerges in response to robot learning control. 2. To isolate the effect of co-adaptation, the only thing we could do was to disable the robot’s adaptation by freezing the policy updates; this setup is shown in Fig 9. 3. For Fig 9, in row A (baseline), participants walked naturally and showed their natural gait pattern; in row C (after online training), the policy is already optimized, so no further human adaptation is needed. In both cases, we observe no sign of human adaptation. 4. In row B (learned offline policy), the offline policy is not optimal and needs to be personalized. This is where we speculate that the participants may recognize the mismatch and attempt to adapt, but since there is no clear or consistent direction, the participants’ responses appeared without a clear pattern or trend. For example, P1 shows delayed timing in $t_A$, while others do not exhibit this behavior. Similarly, P5 demonstrated a slightly earlier duration in $d_C$, which was not seen in other participants. 5. Evidence of co-adaptation during online learning was clear. Figs. 4 and 7 further support this: humans slightly adjust gait duration to accommodate actuator inflation/deflation, while the RL controller adapts its onset timing. 
This coordinated adaptation is evident in the bar plots of Fig 4 (the revised presentation is now Fig 12), showing significant changes in timing and duration between human gait kinematics and robot control. >Q2 Paper's true contribution is in the practical implementation rather than novel algorithms Yes, the offline-to-online learning framework is well-established in the robotics community. Additionally, our proposed RIIV is not limited to the specific locomotion task presented in this study. The core principles of RIIV are broadly applicable to other human–robot locomotion scenarios (for example, our newly added experimental data involves walking on an inclined treadmill at 7 degrees; please refer to Fig 8). >Q3 When simulator-free learning becomes advantageous vs. simulator-based learning 1. We thank the reviewer for this excellent question. In our case, a simulator is not available, and therefore it is not feasible for us to quantitatively compare simulator-based and simulator-free approaches in this study. 2. However, to shed some light, we may consider minimum fidelity requirements for simulations. A simulator can be beneficial if it meets key fidelity criteria, such as accurately replicating system dynamics and kinematics. The feasibility and usefulness of simulation, however, depend heavily on the application. For example, a cart-pole task requires much simpler modeling than a human-in-the-loop soft exosuit system. 3. Even if a model were available to simulate our system’s dynamics, deploying it for real-time control would remain highly challenging, as in the well-known sim-to-real gap in reinforcement learning. 4. In contrast, our simulator-free approach directly engages with the real-world complexities without relying on approximations. 
However, it comes with its own challenges, particularly the need for careful problem formulation and data-centric design (e.g., our RIIV method) to ensure efficient and effective learning while capturing the invariant dynamics of the human-exosuit interaction. 5. Ultimately, the advantage of simulator-free learning lies in its realism and adaptability, while the benefit of simulation depends on the availability of a high-fidelity model. In our context, simulator-free learning is a practical and effective choice, and perhaps the only choice at present. >Q4 Roadmap for more complex scenarios 1. We agree, and our current roadmap focuses on generalizing the method beyond basic treadmill-simulated level-ground walking to more complex and realistic tasks and scenarios, including walking at variable speeds and inclined and declined walking, among others. 2. As a first step of this extension, we have begun developing and validating the approach on inclined walking. Please refer to the newly added Figure 8, which shows results from 7-degree incline treadmill walking. As shown, increasing the incline significantly alters gait kinematics. However, as shown in Panel C, our AIP method remains robust, consistently reducing EMG effort below BASELINE levels across all participants.
Contrastive Visual Data Augmentation
Accept (poster)
Summary: This paper proposes a novel data augmentation technique aimed at improving the recognition capabilities of Large Multimodal Models (LMMs) on rare classes/concepts that are underrepresented in the training set. In particular, authors leverage text-to-image diffusion models to synthesize images of the rare concept, where the generation prompt is crafted via a contrastive technique focused on highlighting the differences between the rare concept and its similar, but common counterpart. Experiments demonstrate that if only a few (5-10) examples of the rare concept are available for fine-tuning, the proposed technique can significantly boost recognition performance over some simple baselines. ## update after rebuttal I thank the authors for clarifying my questions and concerns. I raised my score accordingly. Claims And Evidence: - The claim in the contributions list "CoDA is also the first widely successful method using text-to-image generation for visual data augmentation" in my opinion is too strong. T2I diffusion models have been used for data augmentation before (see ARMADA [1], a baseline from the paper, TTIDA [2] and a survey paper [3]) and "widely successful" is subjective. - Authors claim that CoDA "significantly improve data and compute efficiency compared to existing methods", and while I can get behind the data efficiency aspect (with the same number of real samples, CoDA achieves higher accuracy), compute efficiency is not demonstrated in any way in the paper. In fact, it appears that the technique requires significant effort, involving several steps of feature extraction using LLMs and frontier diffusion model-based generation. Comparative study on the cost of the technique is missing. [1] Jin, Xiaomeng, et al. "ARMADA: Attribute-Based Multimodal Data Augmentation." Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia. 2024. [2] Yin, Yuwei, et al. 
"Ttida: Controllable generative data augmentation via text-to-text and text-to-image models." arXiv preprint arXiv:2304.08821 (2023). [3] Alimisis, Panagiotis, et al. "Advances in diffusion models for image data augmentation: A review of methods, models, evaluation metrics and future research directions." Artificial Intelligence Review 58.4 (2025): 1-55. Methods And Evaluation Criteria: The proposed approach is sensible. Explicitly highlighting the differences between known concepts and a novel concept can be an efficient way to learn the characteristic attributes of the novel concept in a data-efficient way. One caveat I have with the methodology is that it is bottlenecked by the VLMs/LLMs used for extracting visual/textual features. If the concept is truly novel, it is possible that the defining attributes cannot be extracted this way (as the feature extractors haven't seen the concept during training either). In other words, the proposed technique only works in cases where the novel concept is defined by a unique/novel combination of already known attributes. However, for recognizing truly novel concepts, external knowledge may be required. Another concern I have is the current limitations of text-to-image models in correctly generating multiple attributes based on the input prompt, which is crucial for the technique. In particular, [1] shows that even SOTA proprietary T2I models fail to generate as low as 3 distinct attributes correctly in 50% of the cases, and the success probability approaches 0 for 7 attributes. Thus, I believe the granularity of how much contrastive information can be represented in the synthetic images is seriously limited by the current performance of T2I models. Finally, I am unsure about using CLIP for calculating Discriminability/Generability scores. 
In particular, CLIP may have the exact same bias as the LMM towards the well-represented concept and thus may more frequently associate attributes with known concepts simply due to training set bias. [1] Wu, Xindi, et al. "Conceptmix: A compositional image generation benchmark with controllable difficulty." arXiv preprint arXiv:2408.14339 (2024). Theoretical Claims: No proofs/theoretical claims. Experimental Designs Or Analyses: I have some concerns with respect to the experimental evaluation. First, the primary dataset, NovelSpecies, is very small. The results are reported over 64 datapoints, which casts some doubt on the statistical significance of the reported results. Second, the improvements are very inconsistent: in some experiments textual features alone are the best, in other cases visual, and in some cases the combination of both. In some cases the proposed contrastive feature extraction helps, in other cases it doesn't. I believe there has to be more thorough study on the role of the different components of the pipeline, and some guidelines how to select which arrangement to use. The large variance in results may also be a result of the small size of the dataset. Third, it would be necessary to add some naive baselines to gauge the effectiveness of the technique. What happens if we use few-shot prompting with or without augmented samples? What happens if we use few-shot prompting highlighting the differences in text format? Lastly, experiments on larger scale datasets are only performed using a single model (LLaVA-1.6), casting some doubt on the generality of the results. Supplementary Material: Yes, I have reviewed the entire supplementary material. Relation To Broader Scientific Literature: The paper is closely related to and advances the work proposed in [1] by performing the editing in a more targeted way: the edit highlights the features that are different in the novel class from a common class the model confuses it with. [1] Jin, Xiaomeng, et al. 
"ARMADA: Attribute-Based Multimodal Data Augmentation." Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia. 2024. Essential References Not Discussed: I believe the references are sufficient to understand the context and contributions. Other Strengths And Weaknesses: I believe that the core idea of the paper is original and sensible. The significance of the findings is somewhat diminished due to (1) limited dataset size, limited number of models, (2) lack of discussion on cost/compute and (3) lack of ablation studies on the different components of the method. The modularity of the approach and the potential to improve with the emergence of better models is a nice benefit. Moreover, adapting LMMs to novel concepts efficiently is an important direction giving strong motivation to the paper. I find the writing to be more or less clear. Other Comments Or Suggestions: I found the term "hyper-domain-specific" verbose and vague at the same time. How is it defined and how is it different from domain-specific? Questions For Authors: See my points under Other Strengths and Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are pleased that reviewer DV9x finds:
* our **idea original and sensible**
* our **technique novel**
* our **references sufficient**
* and the **modularity of our approach beneficial**

We value your constructive comments and address them in detail: --- **Related Data Augmentation Methods** We want to highlight a key distinction between our **generation-based** and existing **editing-based** augmentation (ARMADA and all methods in [1]): while our method generates T2I augmented images from scratch, editing-based methods create augmented images by perturbing attributes of real images of other concepts (L94-105), leading to two drawbacks:
- they require large amounts of real images, which are rare for novel concepts.
- augmented images often lack background and view variation.

Experiments in Table 2 show that fine-tuning with such data yields very limited gains, on par with traditional editing-based augmentation methods like Flipping. Furthermore, while we aim to thoroughly compare with **published** related works such as ARMADA, at the time of submission, both TTIDA and [1] were **not published** in any known conference or journal. Nevertheless, we are happy to cite and thoroughly discuss both works in our camera-ready version. --- **Can VLMs Describe Features of Truly Novel Concepts** While editing-based methods can only create augmented images by perturbing known attributes present in existing concepts, our generation-based approach can design any novel feature to describe novel concepts. Therefore, **this limitation only exists for editing-based methods**, while CoDA is able to create any novel feature that can be described with human language. Furthermore, Chiquier et al. [2] showed modern VLMs can generate high-quality novel descriptive features for rare species and even otherworldly concepts such as those in the KikiBouba Dataset. 
--- **Clarification on Compute Efficiency** We'd like to clarify: by "compute efficiency", we refer to **efficiency during fine-tuning** with augmented / real images. As described in L631-654, CoDA is inference-only and does not involve any expensive back-propagation. Thus, CoDA's compute cost is an order of magnitude less than that of the fine-tuning process. Our efficiency claim is that, when comparing models fine-tuned with CoDA augmented images against models fine-tuned with all real images or ARMADA images, models fine-tuned with CoDA images can achieve higher performance at lower fine-tuning cost. The results in Table 2 justify this: training on 6 images per concept of mixed Real+CoDA data outperforms training with 20 real images per concept (using over 2x the compute). We acknowledge the term "compute efficiency" was ambiguous in the paper and will clarify it in the camera-ready version. --- **Can T2I Models Generate Multiple Attributes** This is an issue we encountered many times. Consistent with ConceptMix, we found older T2I models like SD-2.1 tend to fail at generating >3 attributes in a single image. However, newer T2I models not included in the benchmark, like SD-3.5, can reliably generate >5 concepts. Furthermore, our two-layer feature filtering (L171-209) and image filtering (L236-251) strategy is specifically designed to filter out any remaining generation failures. Our human evaluation (Table 1) demonstrates that the vast majority of features (83.97%) were successfully generated:

|Image Type|Feature Presence (%)|IAA (κ)|
|-|-|-|
|Real|92.51|0.87|
|Synthetic|83.97|0.82|

--- **CLIP Discriminability/Generability scores** Our intentional mathematical formulation of the Discriminability/Generability scores ensures that the proposed association bias of CLIP would actually **aid in filtering out non-discriminative features**, while having no impact on the relative ranking of generability scores. Due to word limits, please respond for a detailed breakdown. 
--- **Dataset Size and Generality of Results** We want to clarify: while the NovelSpecies dataset has **64 different species**, it actually contains **2240 annotated images**. Results on NovelSpecies are reported on val and test sets each containing **960 unique images** (L275-277). We tested **three models** on this dataset to show CoDA's generality (Table 3). To further prove the statistical significance of our NovelSpecies results, we ran 5 independent sets of experiments on 10 different settings of NovelSpecies with different random seeds. Results show remarkable consistency: the **average standard deviation of scores is 0.0133** across the 10 settings, with a maximum of 0.017. The **average coefficient of variation is 1.82%** across all settings, **far below the general 5% threshold** for statistical stability.

|Statistic|Min|Max|Avg|
|-|-|-|-|
|Standard Deviation|0.0069|0.0170|0.0133|
|Coefficient of Variation|0.0087|0.0249|0.0182|

(Respond to see the full score table) [1] Advances in diffusion models for image data augmentation. AI Review (2025) [2] Evolving Interpretable Visual Classifiers with Large Language Models. ECCV (2024) --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying some of my doubts and concerns. However, I am confused why CoDA would be "inference-only and does not involve any expensive backpropagation". In all experiments, the base model is fine-tuned on the synthetic data. I maintain that feature extraction and image generation using foundation models, as well as SFT, incur non-trivial compute costs that have to be carefully compared to the costs of other comparable techniques in order to substantiate any claims of compute efficiency. Furthermore, my comments on naive baselines are not addressed. What if we use few-shot prompting techniques without fine-tuning? 
I am especially interested in how models perform by (1) simply adding a single contrasting example in the prompt to show the model the visual differences and (2) simply adding a textual description of the visual differences in the prompt. These are computationally cheaper, and it would be interesting to see how much CoDA can improve over these sensible baselines. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer DV9x for their engagement and thoughtful feedback. Your comments have been helpful in refining our work. Below, we address your points regarding compute efficiency and naive baselines. --- ### **"CoDA is inference-only and does not involve any expensive back-propagation"** Here we would like to clarify a potential misunderstanding: **CoDA itself specifically refers to our synthetic data generation process**, which only includes the following components:
- Feature extraction with VLM inference
- Image generation with T2I model inference
- Feature and image filtering with CLIP and VLM inference

The fine-tuning process in our experiments is a downstream usage of CoDA generated synthetic data, not **part of the CoDA method** itself. --- ### **Further Clarifications regarding Efficiency** We **`do not claim`** that CoDA's synthetic data generation process itself is more efficient than other synthetic data generation processes (which is not true, as crop/flip requires no compute at all). Instead, we claim that fine-tuning with CoDA augmented images yields higher performance at lower fine-tuning cost compared to fine-tuning with images generated by other data augmentation techniques. This is justified by results from Table 2: training on 6 images per concept of mixed Real+CoDA data outperforms training with 20 real images per concept (which costs over 2x the compute). The reviewer mentions a third method of comparison, i.e. comparing the data generation cost + fine-tuning cost of CoDA vs that of other methods. 
In the paper, we also **`do not claim`** that this comparison would show CoDA to be more efficient. We recognize that the distinction between these three methods of comparison was not made abundantly clear in our original submission. Therefore, we will remove all claims regarding compute efficiency in our camera-ready version to avoid any misunderstandings. --- ### **Naive Baselines** The reviewer suggested two naive inference-time augmentation baselines:
1. adding a single contrasting image example in the prompt
2. adding a textual description of the concept differences in the prompt

During the exploration phase of our work, we considered these baselines but identified **three major issues severely limiting their general applicability**:
- Both methods require the target model to accept interleaved image-text input. Traditional classifiers like ViT, which excel in single-image tasks (Table 3), cannot leverage these methods.
- Both methods are only applicable to binary classification. In general multi-way classification with more than two choices, it is impossible to pre-determine which set of example images or textual descriptions should be provided to the model.
- Method 1 can only work for VLMs that have strong multi-image referring and reasoning abilities. 
However, most VLMs at the time were not capable enough, as shown by a binary classification experiment with LLaVA 1.6 32B, yielding near-random performance when given image examples:

|Dataset|1-shot Acc (%)|3-shot Acc (%)|Random Acc (%)|
|-|-|-|-|
|NovelSpecies|48.6|50.7|50.0|
|INaturalist|49.3|47.9|50.0|

However, given the reviewer's suggestion and recent advancements in VLMs' multi-image reasoning ability, we ran the following binary classification experiments on the gpt-4-turbo-2024-04-09 model:

|Dataset|0-shot Acc (%)|CoDA Image 1-shot Acc (%)|CoDA Text Features Acc (%)|CoDA Text Features + CoDA Image 1-shot Acc (%)|
|-|-|-|-|-|
|NovelSpecies|87.88|91.58|91.32|95.26|
|iNaturalist|84.08|87.63|86.71|88.55|

Results demonstrate notable improvements when using 1-shot CoDA augmented example images and/or CoDA text features during inference. This shows that **CoDA can indeed help improve model performance via inference-time augmentation techniques**, although this improvement is only observed for binary classification tasks with strong VLMs (due to limited time and significant compute costs, we did not try more settings). We will add this result to our camera-ready paper as it is an interesting finding although not within the main focus / claimed contributions of our work.
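As a rough illustration of the two naive inference-time baselines discussed in this thread, here is a minimal sketch of the two prompt constructions. The dict-based message format, field names, and helper functions are hypothetical illustrations for this discussion, not the prompts or API actually used in the experiments above.

```python
# Hypothetical sketch of the two naive inference-time baselines:
# (1) a contrasting image example in the prompt, (2) a textual
# description of the visual differences. The message structure below
# is an illustrative assumption, not the actual experimental setup.

def image_example_prompt(target, confusable, example_img, query_img):
    """Baseline 1: one contrasting image example, then the query image."""
    return [
        {"type": "image", "data": example_img,
         "text": f"This is a {confusable}, which is often confused with {target}."},
        {"type": "image", "data": query_img,
         "text": f"Is this image a {target} or a {confusable}? Answer with one name."},
    ]

def text_difference_prompt(target, confusable, differences, query_img):
    """Baseline 2: state the distinguishing visual features in text only."""
    diff = "; ".join(differences)
    return [
        {"type": "image", "data": query_img,
         "text": (f"{target} differs from {confusable} as follows: {diff}. "
                  f"Is this image a {target} or a {confusable}?")},
    ]
```

Both builders assume the target model accepts interleaved image-text input, which, as the reply notes, traditional single-image classifiers like ViT cannot, and which restricts the baselines to binary (target vs. confusable) classification.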
Summary: In this paper, the authors propose Contrastive Visual Data Augmentation (CoDA), a novel approach to improve LMMs' ability to recognize novel and easily confused visual concepts. CoDA extracts contrastive features between target concepts and their confusable counterparts, generating synthetic training data with text-to-image models to update the LMM. The paper also contributes a novel dataset, NovelSpecies. Solid experimental results demonstrate the effectiveness of CoDA. Claims And Evidence: Yes, they are. Methods And Evaluation Criteria: Yes, the proposed method makes sense for the problem. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: Yes, I did. Supplementary Material: Yes, I reviewed the supplementary material, including the visualization, data selection strategy, more experimental details and the prompt design. Relation To Broader Scientific Literature: 1. This paper studies the data augmentation strategy, which is extensively applied for model training. 2. A novel benchmark for current LMMs. Essential References Not Discussed: Authors have cited and discussed related works. Other Strengths And Weaknesses: Strengths: 1. The paper is well-written and easy to follow. 2. The proposed data augmentation strategy is effective for novel concepts. 3. The NovelSpecies dataset is interesting. 4. Competitive performances. --- From my perspective, there are no obvious weaknesses in this paper. Other Comments Or Suggestions: There are some typos in the paper. 1. For cross references in the paper, Fig.1 and Tab.1 should be shown as Fig. 1 and Tab. 1 or Figure 1 and Table 1, with a space between "." and the number. 2. More visualizations like Figure 3 could be presented in the Appendix. 3. I'm wondering whether up-to-date online large models (such as GPT-o3 mini) could recognize the novel concepts. (There is no requirement for this point during the rebuttal.) Questions For Authors: Please see above. 
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer bmru for their insightful and encouraging comments. We are pleased that you found:
* our paper **well-written and easy to follow**,
* our proposed **data augmentation strategy effective**,
* our **method's performances competitive**,
* and our new **NovelSpecies dataset interesting and novel**.

We are also encouraged that you recognize that our **solid experimental results demonstrate the effectiveness of CoDA**. Please find your suggestions and additional feedback carefully addressed below: --- **Spacing Between Figure and Table Notation** Thank you very much for catching this formatting issue; your detailed efforts are greatly appreciated. We fully agree that correcting this notation throughout our work would improve its readability and clarity. Therefore, we will make sure to correct all such usage instances in our camera-ready version. --- **More Visualizations like Figure 3 in Paper** We fully agree with the reviewer that visualizations like Figure 3 can help illustrate our method and provide straightforward comparisons with other related methods. Moreover, we currently plan to include an additional figure in our camera-ready version to provide examples of success and (rare) failure cases in our visual data generation process. We believe this will help readers better understand the abilities and potential limitations of our work and facilitate better method adaptation to more domains. --- **Whether Online Large Models can Recognize Novel Concepts** This is a very interesting question raised by the reviewer. We believe that online models with access to up-to-date internet information may indeed be better suited for recognizing novel concepts compared to static models. However, when qualitatively testing current online VLMs such as GPT4o and Claude 3.7 Sonnet, we find that they still fail to recognize most of the concepts in our NovelSpecies Dataset. 
We believe this is likely due to their reliance on textual retrieval and inability to retrieve the relevant information based on novel images. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal and my concerns are addressed. I keep my rating.
Summary: The paper proposes a data augmentation technique for tuning large multimodal models on unseen concepts that are in a way 'very close' to known concepts included in training. The method thus aims at expanding the knowledge of an existing model, including new concepts when they are encountered. The augmentation strategy works on a contrastive learning basis, including text descriptions and generated images of the novel concepts. From this perspective, the paper presents a novel approach to adapting (normally done via fine-tuning) models to new concepts that have not been seen during training. Next to the methodology, the authors present a new dataset of recently discovered animal species that can be used to effectively test novel concepts wrt the iNaturalist dataset. Claims And Evidence: The authors claim that the proposed NovelSpecies dataset can be used to evaluate the ability to recognize novel concepts, which seems extensively demonstrated in the experimental section. Furthermore, there is a methodological claim that the proposed method can adapt existing models in an efficient and effective way, thus overcoming the ineffectiveness of methods based on fine-tuning due to the scarcity of data usable for model adaptation. However, comparisons to fine-tuning-based methods are not evident in the paper. This prevents the reader from appreciating the efficiency and effectiveness contributions claimed. Methods And Evaluation Criteria: The method is described in a detailed way. It makes sense to use text-guided generative models to expand the data available for augmentation at test time. The filtering of features also seems to stabilize the adaptation phase, avoiding the model learning degenerate representations. Overall the method seems fine and is evaluated on several datasets using reasonable metrics. In my opinion, it is novel and provides concrete contributions to the state of the art. Theoretical Claims: I did not find theoretical claims or proofs to validate. 
The paper makes hypotheses and verifies them with a mostly empirical approach. Experimental Designs Or Analyses: The chosen datasets and the proposed one are solid choices to provide an extensive validation. The choice of comparison methods could be extended with fine-tuning-based approaches, as in the introduction they are claimed to be inefficient and ineffective, two shortcomings that the proposed method addresses directly. The ablation with different backbones provides extra evidence that the method can be plugged into different architectures. Supplementary Material: I skimmed through the supplementary material - it contains supporting material and some details for reproducibility. Relation To Broader Scientific Literature: The paper seems well-placed in the current literature and addresses an important problem related to large multimodal models and their deployment in real scenarios. The paper relates, in my opinion, to the literature on test-time augmentation that is used to adapt models at test time. A coverage of this relation should be present in the paper, or the authors should motivate why it is not. Essential References Not Discussed: I do not think there are particular missing references. I think that discussing the relation of this paper with test-time augmentation methods (and citing, for instance, seminal papers on the topic) would make the placement within the literature more solid. Other Strengths And Weaknesses: The paper is clear, with potentially impactful contributions in the field. As a weakness, I would say that the efficiency/effectiveness benefits should be better highlighted, as this is key to providing impact. Other Comments Or Suggestions: When using listings (e.g. 1)..., 2)...), there should be no dot '.' after the parenthesis ')' and the item should not start with a capital letter - that's really confusing while reading. To be explicit, "straightforwardly 1). Fine-tune text decoder on new textual" should be "straightforwardly 1) fine-tune text decoder on new textual". Questions For Authors: - How does the proposed approach relate to test-time augmentation methods? - What is the effectiveness/efficiency gain wrt fine-tuning adaptation? Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)'] Ethical Review Concerns: The authors perform an evaluation involving human evaluators - no statement regarding approval by the institution's ethics committee is included in the paper or the supplementary material. Questions arise on what data of the humans were recorded, how the tests were submitted to the annotators, and what questions were asked. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer pV1R for their detailed review. We are delighted you recognize that: * our **method is novel and provides concrete contributions to the state of the art**, * our **claims are extensively demonstrated**, * our **datasets are solid choices to provide extensive validation** We are also encouraged that you find our paper **well-placed in the current literature, addressing an important problem**. We have carefully considered your valuable feedback and suggestions for improvement, which we address in detail below: --- **Comparison to Fine-tuning Methods** We very much appreciate the reviewer bringing up this important point and would like to take this opportunity to clarify: While a main contribution of our work is providing a state-of-the-art visual data augmentation strategy, we leave downstream users the freedom to decide how to use our augmented synthetic visual data, whether it is model adaptation, few-shot learning, fine-tuning, or even pre-training. **In our experiments, we focus extensively on the fine-tuning use case as it is the most general and intuitive way to utilize our augmented visual data and demonstrate its general usefulness for improving different models.** Specifically, we compare **fine-tuning existing models with all real data, mixed real and synthetic data from baseline data augmentation strategies, and mixed real and synthetic data from our CoDA method**. Results in Table 2 and Table 3 show that basic fine-tuning with all real data or including synthetic data from baseline data augmentation strategies produces much lower performance compared to fine-tuning with mixed real and synthetic data from our CoDA method. 
(L330-420) For better reference, we paste the following results subset from Table 2 of our main paper to illustrate the **performance disparity of basic fine-tuning with all real data vs including one synthetic image per concept from the baseline data augmentation strategy ARMADA and our CoDA method**:

|Dataset|Basic Fine-tuning Acc (%)|ARMADA Acc (%)|CoDA Acc (%)|
|---|---|---|---|
|NovelSpecies|61.2|60.7|70.1|
|SUN|73.4|75.9|83.4|
|INaturalist|49.2|60.1|63.5|

--- **Relation to Test-time Augmentation Methods** We thank the reviewer for pointing out the relations between our work and existing test-time augmentation techniques, a topic we extensively considered and explored during our project. As elaborated in our response to Issue #1 above, we position our work as providing a state-of-the-art visual data augmentation strategy, while allowing downstream users to decide whether to use our augmented data for pre-training, fine-tuning, or test-time augmentation. **To validate the usefulness of our augmented data, we currently focus on the fine-tuning scenario because of:**
- General applicability, as fine-tuning is generally applicable to LMMs as well as traditional classifiers.
- Empirical effectiveness, as the difficulty of the visual novel/confusing concept recognition task limits the effectiveness of test-time augmentation methods.

In fact, during our exploration stage, we ran multiple test-time few-shot generalization experiments on VLMs. Here we attach our **earlier results for 1-shot and 3-shot generalization of LLaVA 1.6 34b, under the extremely simplified binary classification setting**:

|Dataset|1-shot Acc (%)|3-shot Acc (%)|Random Acc (%)|
|-|-|-|-|
|NovelSpecies|48.6|50.7|50.0|
|INaturalist|49.3|47.9|50.0|

The model achieves no better than random-chance performance on these datasets with few-shot learning. 
Further qualitative experiments on proprietary VLMs also showed little promise, which prompted us to move towards fine-tuning methods for better performance. However, we recognize that the reviewer's comment regarding test-time augmentation methods goes beyond basic few-shot learning. We have identified several **areas within multimodal test-time augmentation methods that may be promising given our CoDA method or relevant to the novel/confusing concept recognition task:**
- Multimodal meta learning and fast adaptation methods such as Flamingo
- Code-based multimodal reasoning methods including VisProg and ViperGPT
- Multimodal reflection and self-critique methods including LLaVA-Critic, Critic-V, and VDebugger
- RL-based inference time scaling methods such as Visual-RFT and VAGEN
- Multimodal RAG methods including Wiki-LLaVA, UDKAG and UniRAG

We will include discussions relating our method to these test-time augmentation strategies in the camera-ready version. --- **Highlighting the Effectiveness/Efficiency Gains** We thank the reviewer for recognizing the substantial effectiveness / efficiency gains of CoDA demonstrated in the experimental section. To highlight these gains, **we will include an additional bar chart comparing the performance of CoDA against other baseline methods.** We believe this visual comparison will help readers more easily grasp the effectiveness / efficiency gains of CoDA. --- Rebuttal Comment 1.1: Comment: I have read the response to my comments and also the responses to the comments of other reviewers. I thank the authors for their responses, which mostly clarify my questions and doubts. However, my observations regarding test-time adaptation are only vaguely addressed. If the authors extensively considered it during the preparation of this research project, why is it not discussed? Also, if the authors claim that the proposed approach is more general than test-time augmentation methods and can be used for pre-training, fine-tuning, TTA, etc., 
how will they address and clarify this in the paper? I find that the paper overall contains good contributions and may have some impact in the field. However, a few things are left hanging, also in the rebuttal. --- Reply to Comment 1.1.1: Comment: We thank Reviewer pV1R for engaging in constructive discussion and recognizing our work's **good contributions and potential impact on the field**. Below we address your follow-up questions in detail. --- ### **Scope Clarifications: CoDA is a Data Augmentation Method that does not include Model Updating or Inference** Our proposed technique, **CoDA**, is a **`visual data augmentation method`** designed to generate high-quality synthetic data, which may potentially be used as input to downstream model updating and inference methods such as:
- Fine-tuning
- Pre-training
- Test-time Augmentation (TTA)
- Adapters

CoDA itself is **not** a **`model updating / inference method`**, and we **do not claim** any novel contributions in model updating techniques. Therefore, CoDA should only be compared with other **`visual data augmentation methods`** (e.g. ARMADA) instead of other **`model updating / inference methods`** (e.g. adapters or TTA). We also **do not claim** that our method will be useful for all downstream model updating / inference methods, as it is impossible to verify this exhaustively within a single paper. To demonstrate the usefulness of CoDA generated visual data, we run experiments on fine-tuning VLMs and traditional classifiers using CoDA generated images. We **do not claim** that CoDA is more general than TTA methods (they are not comparable), but rather that we chose fine-tuning over TTA to showcase CoDA's usefulness because we found fine-tuning to be more generally applicable for our specific task (discussed below). --- ### **Considerations Regarding Test-time Augmentation** During our work's exploration stage, we considered two inference-time augmentation strategies: 1. 
adding few-shot image examples in the prompt; 2. adding textual descriptions of the concept differences in the prompt. However, there were three major issues limiting the general applicability of these TTA methods compared to fine-tuning-based model updating methods:
- Both methods require the target model to accept interleaved image-text input. Traditional classifiers like ViT, which excel in single-image tasks (Table 3), cannot leverage these methods.
- Both methods are only applicable to binary classification. In general multi-way classification with more than two choices, it is impossible to pre-determine which set of example images or textual descriptions should be provided to the model.
- Method 1 can only work for VLMs that have strong multi-image referring and reasoning abilities. However, many VLMs at the time were not capable enough, as shown by a binary classification experiment with LLaVA 1.6 32B, yielding near-random performance when given image examples:

|Dataset|1-shot Acc (%)|3-shot Acc (%)|Random Acc (%)|
|-|-|-|-|
|NovelSpecies|48.6|50.7|50.0|
|INaturalist|49.3|47.9|50.0|

Given the reviewer's suggestion and recent advancements in VLMs' multi-image reasoning ability, we additionally tried the following binary classification experiments on the newer gpt-4-turbo-2024-04-09 model:

|Dataset|0-shot Acc (%)|CoDA Image 1-shot Acc (%)|CoDA Text Features Acc (%)|CoDA Text Features + CoDA Image 1-shot Acc (%)|
|-|-|-|-|-|
|NovelSpecies|87.88|91.58|91.32|95.26|
|iNaturalist|84.08|87.63|86.71|88.55|

Results demonstrate notable improvements when using 1-shot CoDA augmented example images and/or CoDA text features during inference. This shows that CoDA can indeed help improve model performance via inference-time augmentation techniques, although this improvement is only observed for binary classification tasks with strong VLMs. 
We will add this result to our camera-ready paper, as it is an interesting finding to motivate future work, although not within the main focus or claimed contributions of this paper. The above results are only **additional explorations** intended to provide more insight into the reviewer's suggestion regarding the consideration of TTA methods. We **do not claim** to have exhaustively studied all TTA methods for our task, and leave this for future work. We welcome the reviewer to propose any additional methods that could potentially be helpful, and we will be happy to cite/discuss them in our camera-ready version. --- ### **Summary** In summary, CoDA is a visual data augmentation method, validated through fine-tuning-based experiments. We did not propose a new model updating / inference technique comparable to TTA, so our evaluation rightly focused on fine-tuning to measure CoDA's impact. We will make this scope explicit in the paper and will acknowledge TTA as a promising future direction.
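The scope distinction drawn in the reply above (CoDA as a data generation pipeline of VLM feature extraction, T2I image generation, and CLIP/VLM filtering, with fine-tuning as a separate downstream choice) can be summarized in a short skeleton. All callables and the filtering criterion here are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
# Hypothetical skeleton of the CoDA scope described above: three
# inference-only data generation stages, with model updating left to
# downstream users. `vlm`, `t2i`, and `clip_score` are stand-in
# callables, not real APIs or the paper's formulation.

def extract_contrastive_features(target, confusable, vlm):
    """Stage 1: VLM inference -- features distinguishing target from confusable."""
    return vlm(f"List visual features that distinguish {target} from {confusable}.")

def generate_images(target, features, t2i):
    """Stage 2: T2I inference -- one synthetic image per contrastive feature."""
    return [(t2i(f"A photo of a {target}, {feat}"), feat) for feat in features]

def filter_images(pairs, clip_score, threshold=0.25):
    """Stage 3: filtering -- drop images whose intended feature is not present
    (placeholder criterion: image-text similarity above a threshold)."""
    return [(img, feat) for img, feat in pairs if clip_score(img, feat) >= threshold]

def coda_generate(target, confusable, vlm, t2i, clip_score):
    features = extract_contrastive_features(target, confusable, vlm)
    return filter_images(generate_images(target, features, t2i), clip_score)
```

Fine-tuning (or any other model updating method) on the returned image-feature pairs would then be a separate, downstream step, exactly as the reply emphasizes.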
Summary: The current submission addresses a known issue in LMMs (Large Multimodal Models), that of recognizing novel or even confusing visual concepts, due to their reliance on pre-trained knowledge and their limited ability to capture subtle visual details. To this end, the authors introduce CoDA (Contrastive visual data augmentation), which extracts key contrastive textual and visual features that differentiate target concepts from concepts they are commonly confused with. Afterward, text-to-image generative models are used to create synthetic training data that highlight these distinctive features, automatically filtering these images for quality assurance. The authors evaluate CoDA on three datasets: iNaturalist, SUN, and a newly introduced dataset called NovelSpecies, consisting of recently discovered animal species guaranteed to be unseen by the LMMs. CoDA is shown to generalize well across proprietary LMMs (GPT4o-mini) and traditional classifiers (ViT) and makes non-trivial improvements in accuracy over state-of-the-art augmentation methods in all the tested scenarios. Claims And Evidence: Claims are well-supported by evidence, such as:
* CoDA outperforms existing visual data augmentation methods, demonstrated through comparative evaluations on three datasets
* CoDA is more effective in dealing with novel concepts, as proved by the experiments conducted on a novel, dedicated and challenging benchmark for this task
* broad applicability to different models such as ViTs and proprietary LMMs (GPT4o-mini)
* solid ablation studies that prove the effectiveness of contrastive feature extraction and augmented image filtering (human evaluation is also used)

Methods And Evaluation Criteria: The evaluation criteria and benchmarks are appropriate. 
The authors evaluate on well-known benchmarks (iNaturalist, SUN) and a novel dataset (NovelSpecies) specifically designed for evaluating the recognition of novel concepts that are guaranteed to be outside the training data of any LMM (with a knowledge cutoff date). Theoretical Claims: No significant theoretical claims were made that required rigorous mathematical proofs. The paper is focused primarily on empirical and practical improvements to LMMs' visual recognition capabilities. Experimental Designs Or Analyses: The conducted experiments are sound and well-executed and include: 1. CoDA comparison against multiple baselines on three datasets 2. CoDA variations (textual-only, visual-only, both) 3. Test on different model architectures 4. Ablation studies to validate key components 5. Human evaluations to verify feature and image quality Supplementary Material: The supplementary material includes the code for these experiments, but I haven't properly assessed it or run it. Relation To Broader Scientific Literature: The authors position their work well, addressing the existing literature on few-shot image recognition, visual data augmentation, and large multimodal models. Essential References Not Discussed: The authors have cited a broad range of relevant literature, and I did not find any glaring omissions of essential works. Other Strengths And Weaknesses: Overall, the strengths of this paper far outweigh the weaknesses. 
Strengths: * The paper addresses a practical and important problem for LMMs in a novel way * I quite enjoyed reading the submission: well-written, well-structured, good discussion * The creation of the NovelSpecies dataset is a valuable contribution to the field for benchmarking novel concept recognition * The method is model-agnostic and can work with any LMM or text-to-image generative model Weaknesses: * As I understood it, the method requires identifying a "confusable concept" for each target concept (one at a time), which could be challenging in some domains where the confusion patterns are not clear. * Limited discussion on potential failure modes or limitations of synthetic image generation. * It is always valuable to have a dedicated section on future work or potential improvements; problems are never solved, and it would be useful for other scientists to have more insights on how this work could be carried out further. Other Comments Or Suggestions: * A more detailed analysis of the computational overhead of CoDA compared to other methods would be useful for practitioners considering adoption. * Overall, I have very few critiques on writing or execution (I did not catch any obvious typos). Questions For Authors: My questions relate to CoDA's sensitivity to various factors, such as: 1. How sensitive is CoDA to the quality of the identified "confusable concept", and what is the impact on performance? 2. How does performance change with varying discriminability/generability thresholds? 3. Increasing the number of synthetic images does not necessarily improve performance. Is there a way to predict in advance how many synthetic images would be optimal for a given concept? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank Reviewer pV1R for the insightful review. We are encouraged that you find: * **our claims well-supported by evidence**, * **our evaluation criteria and benchmarks appropriate**, * **our experiments sound and well-executed**. We are also glad that you **enjoyed reading our submission**, while finding it **well-written, well-structured, with good discussion**. Please find your suggestions carefully addressed below: --- **Discussion on Failure Modes of Synthetic Image Generation** We thank the reviewer for this constructive suggestion. **Given more space in the camera-ready version, we will add the following section to the main paper to explain failure modes in synthetic image generation, along with image examples.** As with any neural generative approach, our method is inherently constrained by the underlying T2I model’s capacity to represent the real world. While we expect this limitation to gradually diminish with the advent of newer and more powerful T2I generative models, as of publication we still occasionally observe nonsensical or biologically implausible artifacts (e.g., two-headed snakes) in rare cases. Additionally, while CoDA emphasizes class-discriminative features, it does not explicitly control for other attributes in the generated images. As a result, there can be unintended biases: for example, the backgrounds of generated images for some classes can be quite similar. While we did not observe any significant impact of such biases on our novel and confusing concept recognition tasks, users adapting our method to different use cases should be aware of these potential unintended effects. --- **Dedicated Future Works Section** We fully agree with the reviewer that problems are never solved, and that it would be useful to provide more insight into how this work could be improved upon or carried out further.
**Given additional space in the camera-ready version, we will add the following future works section to our main paper:** While a main contribution of our work is providing a state-of-the-art visual data augmentation strategy, we leave the downstream innovation on how best to use our augmented visual data to improve models for future work. In our experiments, we focus on the fine-tuning use case as it is the most general and intuitive way to utilize our augmented visual data. Beyond fine-tuning, other potential use cases for our augmented data include model adaptation, test-time augmentation, visual RAG, or even pre-training. The modularity of our method also invites other researchers to replace components of CoDA with superior models to achieve better performance. The NovelSpecies dataset, which will continue to be updated with new species every year, may also be used to evaluate future VLMs' novel concept recognition abilities. Finally, we also expect improved versions of T2I generation-based visual data augmentation techniques to eventually surpass CoDA in effectiveness and efficiency; potential improvements include more robust image / feature filtering and more controllable text-conditioned image generation such as multi-view synthesis. --- **Computational Overhead Comparison** We thank the reviewer for bringing up this point: while we have thoroughly discussed the computational cost of CoDA in Appendix A.3 (L629-654), it would be very helpful to additionally compare this cost with that of existing visual data augmentation baselines such as ARMADA, so practitioners may reference it when considering adoption.
**We plan to provide this information in the camera-ready version via an additional figure.** --- **Explanation of "Confusable Concept"** We are glad that the reviewer brings up this point, which we would like to clarify: While CoDA chooses a confusable concept for each target concept, this process is very general and simply based on model misrecognition. For example, when an LMM is tasked to classify a novel concept it has no previous knowledge of, it will simply hallucinate and provide the closest confusable concept. CoDA then teaches the model the target concept by illustrating the visual differences between the target concept and its corresponding confusable concept. This effectively reduces the difficult and costly task of learning the novel concept from scratch to the much simpler task of learning the difference between the target concept and its confusable concept. The effective learning cost reduction will depend on the level of similarity between the target concept and its closest confusable concept. However, we believe that as VLMs become more knowledgeable, they will also become better at finding higher-quality similar confusable concepts, thus dramatically reducing the cost of learning to recognize novel concepts.
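As a minimal, hypothetical sketch of this selection step (not our exact implementation; here `predictions` stands for the LMM's answers on few-shot images of the target concept, and the species names are purely illustrative):

```python
from collections import Counter

def find_confusable_concept(predictions, target):
    # The most frequent wrong answer the model gives for the target
    # concept is taken as its confusable concept.
    wrong = [p for p in predictions if p != target]
    return Counter(wrong).most_common(1)[0][0] if wrong else None

# e.g., an LMM shown a newly discovered gecko keeps answering "leopard gecko"
labels = ["leopard gecko", "leopard gecko", "crested gecko", "leopard gecko"]
confusable = find_confusable_concept(labels, "newly discovered gecko")
```

CoDA would then generate contrastive images highlighting the visual differences between the target concept and this selected confusable concept.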
Modulated Diffusion: Accelerating Generative Modeling with Modulated Quantization
Accept (poster)
Summary: This work introduces MoDiff, a novel framework for accelerating diffusion models by combining modulated quantization and error compensation. It enhances existing techniques like caching and quantization, offering a more efficient approach without sacrificing generation quality. MoDiff reduces activation quantization from 8 bits to 3 bits in post-training quantization (PTQ) on datasets like CIFAR-10 and LSUN, with no performance loss. Claims And Evidence: N/A Methods And Evaluation Criteria: The models and datasets used in this paper are too old. For example, this paper's latest model, LDM [1], was released in 2022, and most of the experiments are based on CIFAR 32$\times$32. More advanced models (e.g., PixArt [2] or FLUX [3]), metrics (e.g., ImageReward [4] or GenEval [5]), and datasets (e.g., DCI [6] or MJHQ [7]) should be involved. [1] Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 10684-10695. [2] Chen J, Yu J, Ge C, et al. PixArt-α: Fast training of diffusion transformer for photorealistic text-to-image synthesis[J]. arXiv preprint arXiv:2310.00426, 2023. [3] https://github.com/black-forest-labs/flux [4] Xu J, Liu X, Wu Y, et al. ImageReward: Learning and evaluating human preferences for text-to-image generation[J]. Advances in Neural Information Processing Systems, 2024, 36. [5] Ghosh D, Hajishirzi H, Schmidt L. GenEval: An object-focused framework for evaluating text-to-image alignment[J]. Advances in Neural Information Processing Systems, 2024, 36. [6] Urbanek J, Bordes F, Astolfi P, et al. A picture is worth more than 77 text tokens: Evaluating CLIP-style models on dense captions[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 26700-26709. [7] Li D, Kamko A, Akhgari E, et al. Playground v2.5: Three insights towards enhancing aesthetic quality in text-to-image generation[J]. arXiv preprint arXiv:2402.17245, 2024. Theoretical Claims: 1. The assumption in Eq. (29) requires proper bit-width choices, which are constrained by the tensor shape as illustrated in Eq. (78). I suggest the authors discuss the feasible choices for different linear operations in diffusion. 2. The inputs in Eq. (25) and Eq. (28) are not quantized. Does this mean that the authors use full-precision denoising at $T$? Experimental Designs Or Analyses: 1. The superior results in the paper are mainly based on LCQ (i.e., dynamic channel-wise activation quantization). I believe that this scenario cannot obtain any speedup and is never presented in any CNN quantization work. Moreover, the bit-width settings in this paper are a little bit weird (activation << weight). All these need to be further discussed. 2. w8a8 Q-Diffusion w/o MoDiff in some scenarios outperforms MoDiff, which needs to be further explained. 3. Ablation studies for only employing standard modulated quantization vs. the baseline are missing. 4. Experiments related to Q-Diffusion may not be fair. For example, the implementation details in Sec. B are all inconsistent with Q-Diffusion. Supplementary Material: I have reviewed all the supplementary material. Relation To Broader Scientific Literature: This paper lacks a lot of advanced baselines, e.g., MixDQ and EfficientDM. It is better to compare and combine this work with them to show the contribution. [8] Zhao T, Ning X, Fang T, et al. MixDQ: Memory-efficient few-step text-to-image diffusion models with metric-decoupled mixed precision quantization[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 285-302. [9] He Y, Liu J, Wu W, et al. EfficientDM: Efficient quantization-aware fine-tuning of low-bit diffusion models[J]. arXiv preprint arXiv:2310.03270, 2023.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The idea is novel and interesting. Weaknesses: Some parts of this paper are hard to read. For example, the expression in L237-260 and L324-329 is very confusing and difficult to understand. Other Comments Or Suggestions: Eq. (18) needs to be fixed. Questions For Authors: This paper currently focuses on diffusion with hundreds of denoising steps during inference. However, few-step diffusion models are widely used. Can this work achieve performance improvement when being applied to these models? As shown in Eq. (30) and (31), the errors are the same w/ and w/o error compensation for a 2-step diffusion model. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for recognizing the novelty of our paper. We believe there are some misunderstandings about our implementation, and we will address your questions with the following experiments. **Methods And Evaluation Criteria: Q1** MoDiff focuses on quantization rather than the latest models or datasets. To clarify its generality, we evaluate MoDiff on Stable Diffusion with MS-COCO and DiT on ImageNet using public quantized checkpoints [1,2]. Results show that MoDiff generalizes well across architectures and datasets. | W/A | Baseline | MoDiff | |:-:|--:|--:| | 8/8 | 54.80 | 53.76 | | 8/6 | 200.26 | 54.74 | | 8/4 | 271.87 | 90.91 | Table 1: Stable Diffusion | W/A | Baseline | MoDiff | |:-:|--:|--:| | 8/8 | 54.80 | 53.76 | | 8/6 | 200.26 | 54.74 | | 8/4 | 271.87 | 90.91 | Table 2: DiT [1] Q-Diffusion. ICCV 2023. [2] PTQ4DiT. NeurIPS 2024. **Theoretical Claims: Q1** The corollary validates our assumptions and guides design; it is neither a strict per-layer rule nor the tightest bound. We aim to extend this to mixed-precision in future work. **Theoretical Claims: Q2** We warm up by running the first timestep in full precision and then using low-bit activations. This adds negligible overhead. We apply warm-up to the baselines for fairness, showing the performance gain is not from warm-up. | W/A | Baseline + Warmup | MoDiff | |:-:|--:|--:| | 8/8 | 4.19 | 4.21 | | 8/6 | 9.53 | 4.28 | | 8/4 | 299.96 | 28.19 | **Experimental Designs Or Analyses: Q1** First, MoDiff is quantization-method agnostic. We follow [1]'s channel-wise dynamic quantization (hardware-unfriendly, but illustrative), and also test Q-Diffusion and tensor-wise quantization (Tables 1, 11), showing consistent FID improvements. Second, low-bit activation cases are rare due to the large quantization error they introduce. MoDiff breaks the limitations of existing methods by effectively reducing activation quantization error. Similar configurations are also used in prior works [2,3]. [1] DGQ. ICLR 2025.
[2] PikeLPN. CVPR 2024. [3] WRPN. ICLR 2018. **Experimental Designs Or Analyses: Q2** The baseline does not necessarily outperform MoDiff, since MoDiff (1) has smaller quantization errors and (2) has better performance on sFID. Specifically, MoDiff closely matches the full-precision model across all metrics compared to the baseline, meaning its output is nearly the same as the ground truth. Moreover, MoDiff outperforms the baseline in sFID. It is inaccurate to conclude that the baseline is better when considering all metrics. **Experimental Designs Or Analyses: Q3** To address your concern, we show how error compensation improves sFID for CIFAR-10 with DDIM. | W/A | Baseline | w/o Compensation | w/ Compensation | |:-:|--:|--:|--:| | 8/6 | 4.61 | 4.41 | 4.40 | | 8/4 | 13.07 | 10.21 | 4.38 | | 8/3 | 33.97 | 25.42 | 4.38 | **Experimental Designs Or Analyses: Q4** Our comparison with Q-Diffusion is fair. We only make minimal changes and necessary adaptations to integrate our method. Specifically, we (1) apply modulated computation and remove bias for correctness, (2) use temporal differences instead of raw activations for calibration since they serve as inputs during sampling, and (3) omit block-wise quantization for training stability. To demonstrate that (3) does not account for MoDiff’s performance gains, we applied it to Q-Diffusion with W8A8 on CIFAR-10, yielding FID 4.76 (worse than the baseline). Our implementation also reproduces Q-Diffusion results without modulation, confirming consistency. **Relation To Broader Scientific Literature: Q1** We exclude MixDQ due to a lack of public code. EfficientDM uses quantization-aware training, which differs from our post-training quantization focus. **Other Strengths And Weaknesses: Q1** Lines 237–260 explain how errors accumulate in modulated computation and how our compensation method addresses this.
Specifically, $\hat a\_t$ represents the activation computed in the previous timestep, while $a\_t$ is the desired (unquantized) activation. Modulated computation reuses $A(\hat a\_t)$ instead of $A(a\_t)$, so computing temporal difference based on $a_t$ in the next step will introduce errors. Our compensation technique corrects this by computing temporal difference based on $\hat a_t$. Lines 324–329 emphasize that compensation is more critical in practice than theory suggests. In practice, diffusion models suffer from significant error accumulation as quantization errors build up layer by layer. The severity of error accumulation highlights the necessity of applying error compensation. **Other Comments Or Suggestions: Q1** Thanks for pointing out the missing right parenthesis. We will correct the typo in the revised version. **Questions For Authors: Q1** We run tensor-wise quantization on CIFAR-10 with 20-step DDIM. MoDiff consistently improves low-bit quality. | W/A | Baseline | MoDiff | |:-:|--:|--:| | 8/8 | 6.93 | 6.90 | | 8/6 | 20.28 | 6.75 | | 8/4 | 297.21 | 22.12 | --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I still have the following concerns: 1. The results on Stable-Diffusion and DiT are not sufficiently convincing. I believe that including the more advanced models, datasets, and metrics, as I mentioned in the first round, is necessary to validate the applicability in practice. 2. The warm-up and quantized denosing require people to save two models (e.g., full-precision and quantized version), which will bring too much memory consumption and eliminate one of the important effects of quantization (i.e., reducing memory consumption). 3. Since there are no kernel implementations for fast $\texttt{high-bit weight} \times \texttt{low-bit activation}$ (they should have the same bit-width in most cases for real-time speedup in current processors), the paper’s approach lacks practicality (I believe this is not a theoretical paper). 4. 
The author does not answer my question: I mentioned that this happens for w8a8 on some datasets, not in most cases. I merely hope the author will analyze these special cases. 5. [The official code of MixDQ](https://github.com/A-suozhang/MixDQ) was made available online last year. Therefore, "lack of public code" is unreasonable. Moreover, I think EfficientDM should be included for comparison, since it is more efficient than the reconstruction-based PTQ baseline used in the paper. EfficientDM finetunes the LoRA branch, which brings even smaller training overhead than the adaptive rounding in the reconstruction. Thus, I think EfficientDM is very special, as it does not train the model weights like normal QAT, and should also be included. At least, I think the author should combine their methods with more advanced baselines. Q-Diffusion used in the paper is very slow, and thus hard to apply to large-scale models in practice. 6. The author does not include results for few-step diffusion models. I think 20 steps is a moderate-step scenario. I suggest the author use step-distilled models. Overall, the contribution of the paper in practice is insufficient, and I decide to further lower the score. --- Reply to Comment 1.1.1: Comment: Thank you for your detailed feedback. However, these are misunderstandings of our method and response. To resolve these concerns, we will clarify the points of confusion and provide additional experiments. **Q1. Including the more advanced models, datasets, and metrics is necessary.** We would like to emphasize three key points: (1) our method is general, and we demonstrate its effectiveness on widely recognized models; (2) our comparisons are both up-to-date and fair; and (3) we conducted additional experiments on SDXL-Turbo, as suggested, to further address your concerns. To summarize, our method shows consistent improvement in all experiments and ablation studies, which already clearly validates the scientific merits of this work.
Given the rapid development of generative models, it is neither feasible nor necessary to always chase the latest models and datasets. In this context, Stable Diffusion, DiT, and their associated datasets and metrics are still widely used in recent quantization works and remain representative and foundational [1,2,3,4]. Therefore, our comparisons on these architectures are fair. [1] PTQ4DiT. NeurIPS 2024. [2] BiDM. NeurIPS 2024. [3] StepbaQ. NeurIPS 2024. [4] DGQ. ICLR 2025. **Q2. The warm-up and quantized denoising require people to save two models.** This is a misunderstanding of our method. We save only one quantized model; full precision is used solely for the activations in the first time step. For subsequent steps, the same model is used with modulated quantization applied. Equation (6) in our main paper clearly illustrates this core idea of modulated computing. Additionally, we conduct experiments showing that MoDiff without the warm-up phase can still significantly outperform the baselines. |Bits (W/A)|Baseline|MoDiff w/o Warm|MoDiff w/ Warm| |-|-|-|-| | 8/8| 4.19| 4.22|4.21| | 8/6 | 9.93| 4.25|4.00| | 8/4| 28.19| 31.22| 28.19| **Q3. There are no kernel implementations for this setting.** We would like to emphasize three points: (1) hardware is not the focus of our work; (2) MoDiff represents a leap toward enabling low-bit activations; and (3) similar experimental settings are commonly used in existing works. First, while MoDiff is not hardware-centric, our experimental design demonstrates its effectiveness. Moreover, lowering the weight bit-width to match the suggested setting is feasible, as shown in our W4A4 results. Second, this research is forward-looking rather than being restricted by existing hardware. MoDiff is a leap toward low-bit activation. This line of research has the potential to eventually reach 1-bit activations, which would further influence how future hardware should be designed (such as logic-based operations).
Finally, similar settings are adopted in very recent works on this topic [1,2,3] to validate the effectiveness of their methods. [1] DGQ. ICLR 2025. [2] PikeLPN. CVPR 2024. [3] WRPN. ICLR 2018. **Q4. I mentioned that for w8a8 in some datasets, instead of the most cases.** The case you mention is W8A8 in Q-Diffusion, where the baseline achieves a slightly better FID than MoDiff (3.75 vs. 4.10), but MoDiff achieves a better sFID (4.49 vs. 4.39). First, these small differences do not conclusively indicate that Q-Diffusion has better generation quality, as MoDiff also outperforms in sFID. Second, MoDiff exhibits smaller quantization error, better aligning with the full-precision model (FID 4.24, sFID 4.41). As shown in Table 1 (W8A8 case) and Table 3 (W8A8 case) of [1], minor quantization error can slightly reduce FID, which aligns with our findings. In a nutshell, the results in W8A8 are reasonable. [1] Q-Diffusion. CVPR 2023. **Q5. Experiments on MixDQ and Few-step diffusion models.** To address your concern, we conducted additional experiments on 2,4,8-step SDXL-Turbo using MixDQ as the baseline. We generated 10,000 images to compute FID, and the results demonstrate that our method is compatible with both MixDQ and few-step diffusion models, further improving their performance. The implementation is available in the following anonymous GitHub repository: https://anonymous.4open.science/r/MixDQ-MoDiff-7C52/ 2-Step |W/A|MixDQ|MixDQ+MoDiff| |-|-|-| |8/8|46.48|**46.30**| |8/6|318.68|**193.17**| |8/4|304.77|**192.65**| 4-Step |W/A|MixDQ|MixDQ+MoDiff| |-|-|-| |8/8|44.29|**44.74**| |8/6|318.57|**191.59**| |8/4|325.68|**192.74**| 8-Step |W/A|MixDQ|MixDQ+MoDiff| |-|-|-| |8/8|44.61|**43.30**| |8/6|347.75|**210.38**| |8/4|348.75|**212.68**| **Q6. Combine with EfficientDM** First, MoDiff is a post-processing framework, and how to integrate it into the training process is unclear, which we consider as an important future work. 
Second, although EfficientDM is highly efficient, it alters model parameters, rendering it an unsuitable baseline for direct comparison with PTQ methods. As noted in previous studies (very recent ones) [1,2], EfficientDM is typically excluded when evaluating PTQ approaches. [1] DGQ. ICLR 2025. [2] StepbaQ. NeurIPS 2024.
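For readers following this thread, here is a simplified sketch of the modulated computation with error compensation discussed above (our reading of Eq. (6) of the paper); the uniform `quantize` below is only an illustrative stand-in, not the paper's actual LCQ/LTQ/Q-Diffusion quantizers:

```python
import numpy as np

def quantize(x, bits=4):
    # Illustrative uniform tensor-wise quantizer; stands in for any
    # activation quantizer used in the paper.
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1) + 1e-12
    return np.round(x / scale) * scale

def modulated_linear(W, a_t, a_hat_prev, out_prev, bits=4):
    # Quantize the small temporal difference instead of the raw
    # activation, then update the previous output incrementally.
    delta_q = quantize(a_t - a_hat_prev, bits)
    out_t = out_prev + W @ delta_q
    # Error compensation: track the activation actually consumed, so the
    # next temporal difference is taken w.r.t. a_hat rather than a_t.
    a_hat_t = a_hat_prev + delta_q
    return out_t, a_hat_t

# Warm-up: the first timestep runs in full precision (only one model is
# stored; full precision applies to activations alone at that step).
W = np.array([[1.0, 2.0], [3.0, 4.0]])
a_hat = np.array([0.5, -0.5])
out = W @ a_hat
for a in (np.array([0.62, -0.41]), np.array([0.73, -0.35])):
    out, a_hat = modulated_linear(W, a, a_hat, out)
```

Because `a_hat` is updated with the same quantized difference that produced the output, `out` equals `W @ a_hat` exactly at every step by linearity; the only error per step is the quantization error of the small temporal difference, which is why errors do not accumulate.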
Summary: The paper investigates the shortcomings of current acceleration methods for diffusion models, such as caching and quantization, which suffer from error accumulation and high approximation errors, and introduces MoDiff, a novel framework that accelerates diffusion models through modulated quantization combined with error compensation. The authors support their proposal with theoretical analyses that detail quantization errors and the benefits of the error compensation mechanism, and they validate their method through experiments on datasets like CIFAR-10, LSUN-Churches, and LSUN-Bedroom, demonstrating that activation precision can be reduced from 8 bits to as low as 3 bits without any performance degradation in a training-free manner. Claims And Evidence: Yes. Figure 1 offers an interesting analysis that clearly motivates the use of modulated quantization, while the experimental results robustly validate the authors’ claims by demonstrating that MoDiff works well across various weight and activation precisions. Methods And Evaluation Criteria: Yes. The method part is well-motivated. Theoretical Claims: Yes. From my point of view, the proofs in Sections 4.2 and 4.3 are correct and easy to understand. Experimental Designs Or Analyses: The experiments are generally convincing, but they have minor issues. - All experiments were conducted on small-scale datasets such as CIFAR-10, LSUN-Churches, and LSUN-Bedroom, whereas it is standard practice to validate a research idea on larger datasets like ImageNet, especially since the proposed methods are training-free, making scalability feasible. - Additionally, Section 5.3 should extend its evaluation to include more recent samplers, such as flow models, EDM, and DPM-Solver, as relying solely on DDIM and DDPM is insufficient for SOTA validation.
- The ablation study on error compensation would also benefit from employing direct image quality metrics like FID, IS, or P&R instead of the L2 metric, which is less convincing for assessing image quality. - Table 1 contains an error in the bolding; in the 8/6 bits quantization setting, LCQ has a lower FID compared to LCQ+MoDiff, indicating an error in the highlighted results. Supplementary Material: Yes. I've checked Sections C and D in the appendix. Relation To Broader Scientific Literature: This paper examines the limitations of current caching techniques and post-training quantization (PTQ) methods, and based on these insights, the authors introduce MoDiff. Notable related approaches include DeepCache: Accelerating Diffusion Models for Free (NeurIPS 2024) and Q-Diffusion: Quantizing Diffusion Models (ICCV 2023). Essential References Not Discussed: Some caching methods are missing from the discussion. The latter two are about video caching, but they share similar ideas with image caching. - DiTFastAttn: Attention Compression for Diffusion Transformer Models, NeurIPS 2024 - FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality, ICLR 2025 - Adaptive Caching for Faster Video DiTs, arXiv. Other Strengths And Weaknesses: Please see above. Other Comments Or Suggestions: No other suggestions. Questions For Authors: No other questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for recognizing the novelty, effectiveness, and clarity of our paper. We are glad to address your questions. **Experimental Designs Or Analyses: Q1. All experiments were conducted on small-scale datasets such as CIFAR-10, LSUN-Churches, and LSUN-Bedroom, whereas it is standard practice to validate a research idea on larger datasets like ImageNet—especially since the proposed methods are training-free, making scalability feasible.** To address your concern, we conduct experiments following [1], using DiT on ImageNet 256×256 with tensor-wise dynamic quantization. The results demonstrate that our method consistently improves generation quality at low activation bit widths: | W/A | Baseline | MoDiff | |:-:|--:|--:| | 8/8 | 54.80 | **53.76** | | 8/6 | 200.26 | **54.74** | | 8/4 | 271.87 | **90.91** | [1] PTQ4DiT: Post-training Quantization for Diffusion Transformers. NeurIPS 2024. **Experimental Designs Or Analyses: Q2. Should extend its evaluation to include more recent samplers, such as flow models, EDM, and DPM-solver.** To address your concern, we perform tensor-wise quantization using the DPM-Solver-2 on CIFAR-10 with 20 sampling steps. Additionally, we conduct experiments with the PLMS solver using 50 steps on Stable Diffusion with MS-COCO-2014. In both cases, MoDiff consistently improves FID scores across different solvers, even with reduced sampling steps. | W/A | Baseline | MoDiff | |:-:|--:|--:| | 8/8 | 3.92 | **3.91** | | 8/6 | 10.82 | **3.92** | | 8/4 | 299.72 | **26.54** | Table1: DPM on CIFAR-10 | W/A | Baseline | MoDiff | |:-:|--:|--:| | 8/8 | 54.80 | **53.76** | | 8/6 | 200.26 | **54.74** | | 8/4 | 271.87 | **90.91** | Table 2: PLMS on MS-COCO **Experimental Designs Or Analyses: Q3. 
The ablation study on error compensation would also benefit from employing direct image quality metrics like FID, IS, or P\&R instead of the L2 metric, which is less convincing for assessing image quality.** To address your concern, we conduct experiments to show the sFID measurement with or without error compensation on CIFAR10 with DDIM. The results show that error compensation reduces the error accumulation in low bit activation cases. | W/A | Baseline | w/o Compensation | w/ Compensation | |:-:|--:|--:|--:| | 8/6 | 4.61 | 4.41 | **4.40** | | 8/4 | 13.07 | 10.21 | **4.38** | | 8/3 | 33.97 | 25.42 | **4.38** | **Experimental Designs Or Analyses: Q4. Table 1 contains an error in the bolding.** Thank you for pointing out the incorrect bold formatting. We will revise it in a new version. **Essential References Not Discussed: Q1. There are some caching methods missing discussion.** Thank you for mentioning the relevant papers. We will include them in the related works section of the revised version.
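As a side note on the intuition behind Figure 1, which motivates modulated quantization: under a uniform quantizer the error scales with the input's magnitude, so the small, concentrated temporal differences between successive activations quantize far more accurately than the raw activations. A tiny synthetic sketch (illustrative numbers only, not the paper's activations):

```python
import numpy as np

def uniform_quant_error(x, bits=4):
    # Mean absolute error of a uniform tensor-wise quantizer; the error
    # grows with the input's dynamic range (cf. Theorem 4.2).
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.abs(x - np.round(x / scale) * scale).mean()

rng = np.random.default_rng(0)
a_prev = 5.0 * rng.normal(size=1000)        # wide activations with outliers
a_t = a_prev + 0.1 * rng.normal(size=1000)  # next step stays close (temporal stability)

err_raw = uniform_quant_error(a_t)           # quantize the raw activation
err_diff = uniform_quant_error(a_t - a_prev) # quantize the temporal difference
```

Here `err_diff` comes out much smaller than `err_raw`, mirroring why quantizing temporal differences tolerates far lower activation bit-widths.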
Summary: The author introduces MoDiff, a framework designed to accelerate generative modeling by addressing challenges in caching and post-training quantization (PTQ). MoDiff incorporates modulated quantization and error compensation to reduce quantization errors and mitigate error accumulation. Theoretical analysis supports its effectiveness, and experiments on CIFAR-10 and LSUN show that MoDiff enables PTQ to operate at as low as 3-bit activation quantization.

Claims And Evidence: The submission claims that existing caching methods introduce significant computation errors, but that leveraging temporal stability in activation patterns can mitigate these issues. This claim is supported by empirical observations and visualizations (Figure 1), which show that temporal differences in activations have a more stable and concentrated distribution, reducing outliers and error accumulation.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are aligned with the problem. MoDiff is tested on relevant benchmark datasets (CIFAR-10, LSUN) using widely accepted evaluation metrics like FID, IS, and sFID. The study also includes various quantization methods and bit-width settings for comprehensive comparison.

Theoretical Claims: The theoretical claims in the submission are supported by mathematical analysis, particularly Theorems 4.2 and 4.3, which establish the relationship between quantization error, input magnitude, and error accumulation. The proofs suggest that MoDiff effectively reduces quantization error and mitigates error propagation over time.

Experimental Designs Or Analyses: The experimental design and analysis are well structured, assessing MoDiff's effectiveness across multiple datasets (CIFAR-10, LSUN) using standard evaluation metrics (IS, FID, sFID). The results show that MoDiff preserves generation quality while significantly reducing computational costs.

Supplementary Material: I reviewed the supplementary material, with a particular focus on the Implementation Details and Ablation Study sections.

Relation To Broader Scientific Literature: The proposed MoDiff framework builds on existing diffusion model quantization techniques and introduces modulated quantization with error compensation to reduce quantization error and mitigate error accumulation. Notably, MoDiff is orthogonal to existing quantization methods, meaning it can be integrated with approaches like Q-Diffusion to further enhance performance.

Essential References Not Discussed: MoDiff introduces a novel quantization approach that is closely related to existing post-training quantization methods like PTQD. PTQD addresses the challenges of quantization noise and its accumulation by introducing bias correction and variance schedule calibration. Similarly, MoDiff aims to mitigate quantization error and error accumulation. A direct comparison with PTQD would help clarify the advantages and distinctions of MoDiff.

[1] PTQD: Accurate Post-Training Quantization for Diffusion Models.

Other Strengths And Weaknesses:

1. The paper lacks experiments on larger models, such as Stable Diffusion, which would better demonstrate its scalability and real-world applicability.

2. MoDiff's use of modulated quantization and error compensation is highly restrictive, as it can only be applied to linear modules. This requires re-quantization and de-quantization at every linear module, necessitating a redesigned quantization computation graph. As a result, layers like batch normalization cannot be folded, significantly reducing its practical feasibility. Additionally, since MoDiff relies on dynamic quantization, its actual acceleration benefits remain questionable.

3. The performance of LCQ+MoDiff is suboptimal. In Table 1, the results for 4/8 and 4/6 bit quantization are worse than the original LCQ method. Moreover, channel-wise quantization is difficult to implement efficiently on real hardware. Even with tensor-wise quantization, Table 11 shows that LTQ+MoDiff underperforms compared to the original LTQ method at 8/8 and 4/8 bit precision.

Other Comments Or Suggestions: There is a missing right parenthesis in Equation (18).

Questions For Authors: The paper adopts a layer-wise reconstruction training approach, which differs from the traditional block-wise method. This raises the question of whether it results in a longer training duration. A comparison of training efficiency between these approaches would help clarify the potential trade-offs.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
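The low-bit feasibility debated in this review rests on the observation behind Theorem 4.2: min-max quantization error scales with the input range, so quantizing the small-range temporal differences is much cheaper than quantizing raw activations. A toy numpy sketch of that effect, with synthetic activations and an illustrative quantizer (not the paper's implementation):

```python
import numpy as np

def quant_err(x, bits=3):
    # mean absolute error of uniform min-max quantization at the given bit-width
    lo, hi = x.min(), x.max()
    s = (hi - lo) / (2**bits - 1)
    q = np.round((x - lo) / s) * s + lo
    return np.abs(x - q).mean()

rng = np.random.default_rng(0)
# synthetic activations: per-channel offsets give a wide static range,
# while the step-to-step drift stays small
a = np.cumsum(rng.normal(size=(100, 256)) * 0.05, axis=0) + rng.normal(size=256) * 5

direct = np.mean([quant_err(a[t]) for t in range(1, 100)])                # quantize a_t
modulated = np.mean([quant_err(a[t] - a[t - 1]) for t in range(1, 100)])  # quantize the temporal difference
assert modulated < direct
```

At the same bit-width, the difference signal's narrow range yields a proportionally smaller quantization step, which is the intuition the paper pushes down to 3-bit activations.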
Rebuttal 1: Thank you for recognizing the novelty, effectiveness, and clarity of our paper. We believe there are some misunderstandings about our implementation, and we are glad to address your questions.

**Essential References Not Discussed: Q1. Compare with PTQD.**

Compared to PTQD, MoDiff is (1) more general and flexible, (2) free from strong assumptions about error distribution, and (3) significantly more effective in low-precision scenarios.

(1) PTQD requires solver-specific adaptations to address variance and bias, while MoDiff can be applied across solvers without modification. Moreover, PTQD is restricted to standard diffusion models, whereas MoDiff also supports cached diffusion models by compensating for reuse errors in cached components.

(2) PTQD relies on strong assumptions about error distribution, specifically that quantization errors follow a Gaussian distribution after input rescaling. This assumption can introduce inaccuracies in error estimation. In contrast, MoDiff leverages the widely observed similarity between timesteps, which is well supported by prior works [1].

(3) MoDiff performs well in low-precision activation settings, whereas PTQD fails entirely. To demonstrate this, we evaluate both methods on CIFAR-10 with W8A4 quantization. PTQD yields an FID of 397.12 and fails to produce meaningful images, while MoDiff achieves a much lower FID of 13.41. We will include these comparisons in the revision.

[1] DeepCache: Accelerating Diffusion Models for Free. CVPR 2024.

**Other Strengths And Weaknesses: Q1. Lack of experiments on large models (Stable Diffusion).**

To address your concern, we conduct tensor-wise quantization on Stable Diffusion v1.4 using the 50-step PLMS solver on MS-COCO-2014. The resulting FID scores demonstrate that MoDiff consistently performs well on large-scale diffusion models.

| W/A | Baseline | MoDiff |
|:-:|--:|--:|
| 8/8 | 54.80 | **53.76** |
| 8/6 | 200.26 | **54.74** |
| 8/4 | 271.87 | **90.91** |

**Other Strengths And Weaknesses: Q2. MoDiff is restrictive: it does not support norm layer folding and depends on dynamic quantization.**

There are a few misunderstandings about our paper. We clarify that MoDiff (1) supports norm layer folding and (2) does not depend on dynamic quantization.

(1) MoDiff supports norm layer folding and is practical in use. It applies to any linear operation, not just linear layers. By folding norm layers with other linear components such as convolution layers, MoDiff can be applied to the resulting block due to preserved linearity. In our implementation, we did not perform such merging, following standard practice in the diffusion quantization community [1].

(2) MoDiff is agnostic to the quantization method and not limited to dynamic quantization. As shown in Table 1 of the main paper, it consistently improves performance in low-precision settings with Q-Diffusion, which is static quantization. Additionally, dynamic quantization is well studied in the literature [2] and supported by certain hardware platforms [3].

[1] https://github.com/Xiuyu-Li/q-diffusion
[2] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models. ICML 2023.
[3] NVIDIA TensorRT Documentation. https://docs.nvidia.com/deeplearning/tensorrt/latest/inference-library/work-quantized-types.html

**Other Strengths And Weaknesses: Q3. Suboptimal results in some settings.**

First, after carefully reviewing all log files, we identified some errors in writing the results for the $W8A8$ and $W4A8$ cases. The revised results are now provided. As shown in the updated table, MoDiff consistently matches both the baseline and full-precision models in terms of FID and sFID.

| Model | FID | sFID |
|:--|--:|--:|
| LCQ 4/8 | 4.96 | 4.94 |
| LCQ + MoDiff 4/8 | 4.95 | 4.95 |
| LTQ 4/8 | 5.02 | 5.21 |
| LTQ + MoDiff 4/8 | 5.05 | 5.16 |
| LTQ 8/8 | 4.19 | 4.40 |
| LTQ + MoDiff 8/8 | 4.21 | 4.37 |

Second, MoDiff is agnostic to quantization methods; channel-wise dynamic quantization (we follow [1] and agree that it is hardware-unfriendly) is used to show the feasibility and potential of MoDiff. In addition to channel-wise dynamic quantization, we report results on Q-Diffusion and tensor-wise quantization in Tables 1 and 11, with consistent improvement in FID in low-bit activation cases.

[1] DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models. ICLR 2025.

**Other Comments Or Suggestions: Q1. Missing right parenthesis in Eq. (18).**

Thank you for pointing it out. We will correct the typo in the revised version.

**Questions For Authors: Q1. Whether MoDiff needs more training time.**

MoDiff is even more efficient than Q-Diffusion in scaling factor calibration. As noted in our implementation, we skip block-wise quantization for training stability while maintaining the same number of iterations to learn scaling factors. This results in reduced time for post-training calibration.
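The rebuttal's norm-folding argument relies on inference-mode normalization being an affine, hence linear, map that can be absorbed into an adjacent linear or convolution layer. A minimal numpy sketch of the standard BatchNorm-into-linear fold (shapes and the helper name are illustrative, not the authors' code):

```python
import numpy as np

def fold_bn_into_linear(W, b, gamma, beta, mean, var, eps=1e-5):
    # inference-mode BN is y = gamma * (z - mean) / sqrt(var + eps) + beta,
    # an affine map absorbed into the preceding linear layer's weights and bias
    scale = gamma / np.sqrt(var + eps)
    W_f = W * scale[:, None]          # fold per-channel rescale into weights
    b_f = (b - mean) * scale + beta   # fold shift into bias
    return W_f, b_f

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)), rng.normal(size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)
x = rng.normal(size=(5, 3))

y_ref = gamma * ((x @ W.T + b) - mean) / np.sqrt(var + 1e-5) + beta  # linear then BN
W_f, b_f = fold_bn_into_linear(W, b, gamma, beta, mean, var)
y_fold = x @ W_f.T + b_f                                             # single folded layer
assert np.allclose(y_ref, y_fold)
```

Because the folded block is still a single linear operation, quantizing its input difference (as MoDiff does for any linear module) remains valid after the merge.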
Summary: This paper introduces Modulated Diffusion (MoDiff), an approach that combines caching and quantization techniques while addressing their limitations. By leveraging the differences in activations across diffusion timesteps for quantization and incorporating an error compensation mechanism, MoDiff effectively mitigates error accumulation. The method's effectiveness is validated through experiments on the CIFAR-10 and LSUN datasets.

## update after rebuttal

I appreciate the authors' efforts in adding new baselines during the rebuttal; that certainly improved the paper. This is a borderline paper. However, as the authors didn't fully address the concern about the practical speedup, I tend to keep my original rating, as I find the description of rating 2, "leaning towards reject, but could also be accepted," to best describe my judgement.

Claims And Evidence: Yes, the claims are well supported by evidence. Nevertheless, please see the "Questions" section for further questions on the quantization scheme.

Methods And Evaluation Criteria: The evaluation and methodology design are generally appropriate for the study.

Theoretical Claims: The claims are correct.

- Theorem 4.2: Quantization error is related to the input range and the number of bits. The small range of temporal differences allows for low-bit quantization (Paragraphs 1-86, 1-97).
- Theorem 4.3: Error compensation makes the error decay exponentially, avoiding linear accumulation (Paragraphs 1-98, 1-110).

Experimental Designs Or Analyses: The experimental designs are generally appropriate for the study. However, comparisons with the existing literature are lacking. The paper would benefit from a more thorough discussion and comparison with existing works in the field.

1. Limited focus on U-Net-based architectures: While the paper primarily focuses on U-Net-based architectures, diffusion transformer-based architectures (e.g., DiT) have emerged as widely adopted alternatives. Including experiments or analysis on more recent architectures is recommended.

2. Narrow scope of related work: The paper focuses on diffusion model caching and quantization, yet only earlier research (e.g., DeepCache, Q-Diffusion, published in early 2023) is discussed. More recent diffusion quantization literature should be discussed in the related work section. Additionally, an analysis of whether the proposed method is applicable to these techniques would strengthen the authors' claim that "MoDiff is agnostic to quantization techniques."

Supplementary Material: Yes, I checked the appendices.

Relation To Broader Scientific Literature: The idea of modulated quantization relates to other diffusion quantization and caching techniques, such as:

- Diffusion quantization methods that aim to reduce quantization error from alternative perspectives.

[1] He, Yefei et al. "PTQD: Accurate Post-Training Quantization for Diffusion Models." arXiv abs/2305.10657 (2023).
[2] Zhao, Tianchen et al. "ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation." arXiv abs/2406.02540 (2024).

- Diffusion feature caching techniques that propose a similar idea of representing the residual of a timestep to reduce caching errors.

[3] Chen, Pengtao et al. "Δ-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers." arXiv abs/2406.01125 (2024).
[4] Zou, Chang et al. "Accelerating Diffusion Transformers with Dual Feature Caching." arXiv abs/2412.18911 (2024).

Essential References Not Discussed: Some recent literature related to diffusion quantization and caching is not discussed or compared against; refer to the "Relation To Broader Scientific Literature" section.

Other Strengths And Weaknesses:

**Other Strength** The paper provides theoretical analyses to support the claims.

**Other Weakness** I encourage the authors to further discuss the novelty: the key idea of MoDiff aligns with existing research. Specifically, MoDiff proposes to separately quantize the timestep-wise difference and the original computation, leveraging the observation that the temporal difference distribution "has a smaller but consistent range." Similar concepts, such as approximating temporal differences and storing intermediate results for error compensation, have been explored in recent caching methods [1] to reduce caching errors.

[1] Chen, Pengtao et al. "Δ-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers." arXiv abs/2406.01125 (2024).

Other Comments Or Suggestions: No

Questions For Authors: I have three questions on the practical efficiency improvements:

1. The primary contribution of MoDiff is achieving lower activation bitwidth (e.g., W8A4). However, such bitwidth reductions face challenges in delivering practical savings. On GPUs, activations need to be upcast to 8-bit for INT8 computation. In most cases, the memory cost of activations is significantly smaller than that of weights, so the memory reduction from quantizing activations to lower bitwidths is limited.

2. As stated, "For activation quantization, dynamic channel-wise quantization determines the scaling factor based on the channel-wise min-max range of the input." However, this channel-wise activation scheme can hinder actual hardware acceleration, as discussed in [1], because channels need to be summed together and should share the same quantization parameters to enable efficient integer computation.

3. Additionally, the evaluation is limited to earlier solvers (e.g., DDIM) with a notably large number of timesteps (100-400). It remains unclear whether MoDiff is applicable to more commonly adopted efficient solvers, such as DPM-Solver.

[1] Zhao, Tianchen et al. "ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation." arXiv abs/2406.02540 (2024).

Code Of Conduct: Affirmed.

Overall Recommendation: 2
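The channel-wise concern in Question 2 arises because per-channel scales adapt to outlier channels, which helps accuracy but complicates integer matmuls (each channel then carries its own scale). A toy numpy sketch contrasting per-tensor and per-channel min-max quantization under one outlier channel (illustrative quantizer, not ViDiT-Q's or the paper's scheme):

```python
import numpy as np

def quantize(x, bits=4, axis=None):
    # min-max affine fake-quantization; axis=None -> one per-tensor scale,
    # axis=0 -> one scale per channel (column)
    keep = axis is not None
    lo = x.min(axis=axis, keepdims=keep)
    hi = x.max(axis=axis, keepdims=keep)
    scale = (hi - lo) / (2**bits - 1)
    return np.round((x - lo) / scale) * scale + lo

rng = np.random.default_rng(1)
x = rng.normal(size=(64, 8))
x[:, 0] *= 50  # one outlier channel inflates the shared per-tensor range

err_tensor = np.abs(x - quantize(x)).mean()
err_channel = np.abs(x - quantize(x, axis=0)).mean()
assert err_channel < err_tensor
```

The accuracy gap explains why the paper's baseline uses channel-wise scales, while the reviewer's point stands: the per-channel variant is harder to map onto integer accumulation hardware.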
Rebuttal 1: **Experimental Designs Or Analyses: Q1**

Although the goal of this paper is to validate our method, not to benchmark it across all diffusion architectures, we conduct experiments following [1] to address your concern, using DiT on ImageNet. The results demonstrate that our method consistently improves generation quality:

| W/A | Baseline | MoDiff |
|:-:|--:|--:|
| 8/8 | 54.80 | **53.76** |
| 8/6 | 200.26 | **54.74** |
| 8/4 | 271.87 | **90.91** |

[1] PTQ4DiT: Post-training Quantization for Diffusion Transformers. NeurIPS 2024.

**Experimental Designs Or Analyses: Q2**

We will include the mentioned literature in the revised version. Regarding the papers you cited, MoDiff (1) offers orthogonal contributions and can complement quantization methods; (2) generalizes caching approaches; and (3) reduces quantization error better than PTQD.

(1) ViDiT-Q is a quantization method that does not exploit timestep similarities or compensate for quantization errors. MoDiff addresses these limitations to reduce quantization error and prevent error accumulation, which is orthogonal to ViDiT-Q. Moreover, MoDiff is a general framework compatible with ViDiT-Q's techniques, including group-wise quantization, channel balance, and mixed-precision computation.

(2) $\Delta$-DiT and dual feature caching rely on empirical, heuristic designs to cache different components without compensating for quantization error. MoDiff is a general framework that encompasses these methods as special cases. Specifically, MoDiff reduces to them when the cached component is quantized to 0 bits while the remaining components use full precision.

(3) PTQD reduces quantization error by post-processing quantized models based on assumed error distributions. However, it ignores timestep similarities and fails under low-precision activations. To verify this, we evaluate PTQD with W8A4 on CIFAR-10, where it yields an FID of 397.12 and fails to generate meaningful images, while MoDiff achieves a significantly better FID of 13.41.

**Other Strengths And Weaknesses: Q1**

We believe there are some misunderstandings about the novelty of MoDiff. Specifically, (1) $\Delta$-DiT only approximates differences without modulating the cache, and (2) it lacks any form of error compensation.

First, $\Delta$-DiT does not approximate temporal differences but instead estimates differences between transformer blocks. Moreover, it directly reuses cached values, which introduces errors, whereas MoDiff applies lightweight modulation to reduce these errors.

Second, MoDiff explicitly traces quantization errors from the previous timestep and compensates for them in the next with theoretical guarantees. In contrast, caching methods, including $\Delta$-DiT, simply reuse cached components without addressing accumulated errors.

**Questions For Authors: Q1**

We want to emphasize that (1) MoDiff is hardware-friendly and benefits from hardware in practice, though hardware implementation is out of the scope of our study, and (2) MoDiff focuses on improving computational efficiency rather than memory efficiency. In addition, exploring lower-bit activations, even down to 1 bit, is valuable and aligns with ongoing research in quantization [2, 3].

First, GPU architectures support 4-bit activation computation [1], but performance degrades significantly at such low precision. MoDiff addresses this limitation by overcoming the performance bottleneck of low-precision activations. Second, low-bit activations offer much more speedup over 8-bit activations by reducing FLOPs [1], making them a valuable target for acceleration. Finally, MoDiff represents a step toward enabling low-bit activations, not the endpoint. This direction could eventually push activation precision down to 1 bit, potentially driving future hardware designs to better support ultra-low-bit computation, such as logic-based operations.

[1] Int4 Precision for AI Inference. https://developer.nvidia.com/blog/int4-for-ai-inference.
[2] BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models. ICLR 2025.
[3] BiDM: Pushing the Limit of Quantization for Diffusion Models. NeurIPS 2024.

**Questions For Authors: Q2**

We want to emphasize that MoDiff is agnostic to quantization methods. Channel-wise dynamic quantization is used to show the feasibility and potential of MoDiff, following the baseline [1]. In addition to channel-wise dynamic quantization, we have also reported results on Q-Diffusion and tensor-wise quantization (more hardware-friendly) in Table 1 and Table 11, with consistent improvement across measurements.

[1] DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models. ICLR 2025.

**Questions For Authors: Q3**

To address your concern, we perform tensor-wise quantization using DPM-Solver-2 on CIFAR-10 with 20 sampling steps.

| W/A | Baseline | MoDiff |
|:-:|--:|--:|
| 8/8 | 3.92 | **3.91** |
| 8/6 | 10.82 | **3.92** |
| 8/4 | 299.72 | **26.54** |

---

Rebuttal Comment 1.1: I thank the authors for the rebuttal. The authors provided additional validation demonstrating compatibility with DiT models and more recent solvers, such as DPM-Solver. These address part of my concern. However, the following concerns remain:

- This research combines caching and quantization techniques. Instead of claiming that "the method is agnostic to quantization techniques", evaluations on more recent and advanced approaches are essential for demonstrating the effectiveness of the proposed method.
- My primary concern continues to be the validity of the practical acceleration. I asked about the practical acceleration when *the activation bitwidth is lower than the weight bitwidth*.
In response, the authors reference studies on binary neural networks to argue that "exploring lower-bit activations, even down to 1 bit, is valuable". However, in binary networks, **both** weights and activations are quantized, which allows multiplication operations to be replaced with significantly lower-complexity alternatives and implemented efficiently even on CPUs. Similarly, in INT4 computation on NVIDIA hardware, acceleration benefits only apply when **both** weights and activations are quantized to 4 bits (i.e., W4A4). Therefore, I still question the practical adoptability of the configuration used in this paper (W8A4). In fact, MoDiff only shows comparable performance to the baseline under the practically relevant W8A8 setting. Therefore, I keep the original review score.

---

Reply to Comment 1.1.1: Thank you for your feedback. However, there appear to be some misunderstandings regarding our results. To address your concerns, we provide clarifications and include additional experiments.

**1. This research combines caching and quantization techniques. Instead of claiming that "the method is agnostic to quantization techniques", evaluations on more recent and advanced approaches are essential for demonstrating the effectiveness of the proposed method.**

To address this concern, we conducted additional experiments during the rebuttal phase using PTQ4DiT and MixDQ. The consistent improvements observed in both cases illustrate the generalization capability of MoDiff [1, 2]. The PTQ4DiT results are included in our response to you, while the MixDQ results are provided in our response to Reviewer YCE9.

[1] PTQ4DiT: Post-training Quantization for Diffusion Transformers. NeurIPS 2024.
[2] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization. ECCV 2024.

**2. My primary concern continues to be the validity of the practical acceleration. I asked about the practical acceleration when the activation bitwidth is lower than the weight bitwidth. [...] In fact, MoDiff only shows comparable performance to the baseline under the practically relevant W8A8 setting.**

We believe the reviewer may have overlooked some key content in our paper. MoDiff yields improvements not only under the W8A8 setting. Moreover, our work demonstrates the effectiveness of the algorithm independent of the weight bit-width, as hardware is not the primary focus. In practice, the weight bit-width can be adjusted to align with specific hardware constraints. Our experimental settings follow prior works to enable a comprehensive evaluation of MoDiff across various weight bit-widths [1, 2, 3].

First, MoDiff improves performance even with 4-bit weights. As shown in Tables 1, 9, and 10 in our paper, MoDiff consistently reduces FID scores under both W4A6 and W4A4 settings. Second, typical PTQ implementations quantize weights before activations, meaning weight quantization operates independently of MoDiff, which targets activation quantization. Therefore, we do not focus on the choice of weight bit-width, but instead validate the effectiveness of our method across varying weight bit-widths. It remains feasible to lower both weight and activation bit-widths to match the hardware settings mentioned.

[1] DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models. ICLR 2025.
[2] PikeLPN: Mitigating Overlooked Inefficiencies of Low-Precision Neural Networks. CVPR 2024.
[3] WRPN: Wide Reduced-Precision Networks. ICLR 2018.
Summary: This paper proposes a method for accelerating diffusion model sampling via modulated quantization and a carefully designed error compensation mechanism; the method is able to significantly reduce the accumulated error of previous methods.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: The proof of Theorem 4.3 is correct minus a minor mistake that only affects the constant. See suggestions below.

Experimental Designs Or Analyses: Yes. The experiments are thorough and give strong evidence for the claims.

Supplementary Material: No.

Relation To Broader Scientific Literature: The main contribution lies in the error compensation mechanism designed with modulated quantization. Previous quantization methods have the issue that error accumulates with diffusion steps.

Essential References Not Discussed: I am not an expert in quantization methods for diffusion models. The diffusion model literature is properly cited in this work.

Other Strengths And Weaknesses:

* The paper is well presented and easy to understand, with sufficient background material.
* The error compensation scheme addresses the issue of error accumulation nicely and is backed by theory.
* Results on both pixel diffusion and latent diffusion show significant improvement from applying MoDiff to existing diffusion quantization methods Q-Diff and LCQ, especially in the low-bit regime.

I don't see many weaknesses, except that the experiments could benefit from more diverse datasets like ImageNet 256x256 and 512x512.

Other Comments Or Suggestions: Minor suggestions:

* In Table 1, FID column, some values are incorrectly bolded, e.g., 4.21.
* In the proof of Theorem 4.2, is there a factor of two missing in equation (56)?

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Thank you for recognizing the novelty, effectiveness, and clarity of our paper. We are glad to address your questions.

**1. In Table 1, FID column, some values are incorrectly bolded, e.g., 4.21**

Thank you for pointing out the incorrect bold formatting. We will revise it in a new version.

**2. In the proof of Theorem 4.2, is there a factor of two missing in equation (56)?**

Thanks for pointing out the missing constant scalar. Here is the revised proof, which does not affect the conclusions:

$$
\begin{aligned}
\tilde{\mathbf{e}}_t^2 &= \|\mathbf{o}_t - \tilde{\mathbf{o}}_t\|_2^2 \\
&= \|\mathbf{o}_t - \mathcal{A}(Q(\mathbf{a}_t - \mathbf{a}_{t+1})) - \tilde{\mathbf{o}}_{t+1}\|_2^2 \\
&= \|\mathbf{o}_t - \mathbf{o}_{t+1} - \mathcal{A}(Q(\mathbf{a}_t - \mathbf{a}_{t+1})) + (\mathbf{o}_{t+1} - \tilde{\mathbf{o}}_{t+1})\|_2^2 \\
&= \|\mathcal{A}(\mathbf{a}_t - \mathbf{a}_{t+1}) - \mathcal{A}(Q(\mathbf{a}_t - \mathbf{a}_{t+1})) + (\mathbf{o}_{t+1} - \tilde{\mathbf{o}}_{t+1})\|_2^2 \\
&= \|\mathcal{A}(\mathbf{a}_t - \mathbf{a}_{t+1} - Q(\mathbf{a}_t - \mathbf{a}_{t+1})) + (\mathbf{o}_{t+1} - \tilde{\mathbf{o}}_{t+1})\|_2^2 \\
&\leq 2\|\mathcal{A}(\mathbf{a}_t - \mathbf{a}_{t+1} - Q(\mathbf{a}_t - \mathbf{a}_{t+1}))\|_2^2 + 2\|\mathbf{o}_{t+1} - \tilde{\mathbf{o}}_{t+1}\|_2^2
\end{aligned}
$$
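A numpy sketch may help connect the proof's telescoping structure to the compensation claim: if each step's quantization residual is folded into the next temporal difference before quantizing, the accumulated output error collapses to a single-step term instead of a sum over all steps. The quantizer and linear map below are toys, not the paper's implementation:

```python
import numpy as np

def Q(x, bits=4):
    # toy uniform min-max quantizer, a stand-in for the paper's quantizer
    lo, hi = x.min(), x.max()
    s = (hi - lo) / (2**bits - 1)
    return x if s == 0 else np.round((x - lo) / s) * s + lo

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 6))                            # a fixed linear operation
a = np.cumsum(rng.normal(size=(50, 6)) * 0.1, axis=0)  # slowly drifting activations

o = a[0] @ A.T       # assume a full-precision first step
e = np.zeros(6)      # traced quantization residual
for t in range(1, 50):
    d = a[t] - a[t - 1] + e   # fold the previous residual into this difference
    q = Q(d)
    e = d - q                 # residual carried to the next step
    o = o + q @ A.T

# The sum telescopes: the accumulated output differs from the exact a_t @ A.T
# by A applied to the single final-step residual e, not by 49 summed residuals.
assert np.allclose(o, a[-1] @ A.T - e @ A.T)
```

Since `e` is bounded by one quantization step of a small-range difference, the output error stays bounded over the trajectory, which is the informal content of Theorem 4.3.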
FedPHA: Federated Prompt Learning for Heterogeneous Client Adaptation
Accept (poster)
Summary: This paper proposes a method called FedPHA (Federated Prompt Learning for Heterogeneous Client Adaptation) to enhance federated prompt learning in diverse client environments. It addresses two key challenges: the limitation of uniform prompt lengths in existing methods and the conflict between global and local knowledge during aggregation. FedPHA introduces a federated prompt heterogeneous architecture, combining a fixed-length global prompt for efficient aggregation with variable-length local prompts to capture client-specific characteristics. To further mitigate conflicts, FedPHA incorporates SVD-based projection to filter out conflicting information and bidirectional alignment to maintain local distinctiveness while benefiting from global knowledge. The approach is communication-efficient, requiring no major modifications to pre-trained models, and significantly improves performance in federated learning settings.

## update after rebuttal

The paper is well motivated and clearly structured, presenting a novel and effective solution to a practical federated learning problem. After carefully reading the rebuttal, my concerns have been sufficiently addressed. Therefore, I maintain my positive rating.

Claims And Evidence: Yes, the claims in the submission are supported by experimental results.

Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for addressing client heterogeneity.

Theoretical Claims: The SVD-based projection mechanism has a solid theoretical foundation, and no significant issues were identified in the proofs.

Experimental Designs Or Analyses: The experimental design is well structured and valid, evaluating the proposed methods on diverse heterogeneous datasets with non-IID settings. Comparisons with five baseline methods and clearly defined hyperparameters ensure a robust assessment.

Supplementary Material: The code is executable and produces correct results.

Relation To Broader Scientific Literature: Existing federated prompt learning methods cannot support different local prompt lengths due to aggregation requirements and structural constraints. In contrast, FedPHA introduces a federated prompt heterogeneous architecture, combining a fixed-length global prompt for efficient aggregation with variable-length local prompts to capture client-specific characteristics.

Essential References Not Discussed: No essential references appear to be missing.

Other Strengths And Weaknesses:

Strengths:

1. This approach tackles client heterogeneity in federated learning, improving adaptability to diverse data distributions. FedPHA is communication-efficient and scalable, requiring no extensive fine-tuning for real-world applications.
2. The motivation is clear and compelling. The federated prompt heterogeneous architecture enables aggregation while accommodating individual client needs. The SVD-based projection mechanism resolves conflicts between global and local knowledge, preserving essential local information. The bidirectional alignment function ensures alignment and preserves the unique characteristics of both global and local representations.
3. The experiments are comprehensive, covering a wide range of scenarios to thoroughly evaluate the proposed methods.

Weaknesses:

1. In Section 3.2, according to the submitted code, zero padding is applied at the end of the sequence, while the paper incorrectly describes it as being added to the middle. This inconsistency between the implementation and the description should be corrected.
2. In Section 4.3, the experiments are only conducted on the Office31 dataset. It would be beneficial to include more datasets in this section to provide a more comprehensive evaluation.
3. In Table 3, a comparison with a FedPHA model using a fixed local prompt length should be added. This comparison would help demonstrate the robustness of using random prompt lengths in FedPHA.

Other Comments Or Suggestions: The method proposed in this paper enables federated prompt learning to adapt to varying local prompt lengths. I hope future work can explore this aspect further.

Questions For Authors: This paper proposes an innovative FPL approach. While the methodology is comprehensive, there are several concerns regarding the implementation details, additional dataset evaluation, and prompt lengths. Please address these concerns in the rebuttal.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
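The SVD-based projection is only summarized in this review, so the following is one plausible reading rather than the authors' exact mechanism: keep the part of the global prompt that lies in the top-k singular subspace of the local prompt and discard the remainder as conflicting. A hedged numpy sketch (the helper name, shapes, and choice of subspace are all hypothetical):

```python
import numpy as np

def svd_filter(global_p, local_p, k):
    # project the global prompt onto the span of the local prompt's top-k
    # right singular vectors, discarding components outside that subspace
    _, _, Vt = np.linalg.svd(local_p, full_matrices=False)
    basis = Vt[:k]                      # (k, dim), orthonormal rows
    return global_p @ basis.T @ basis   # orthogonal projection

rng = np.random.default_rng(2)
local_p = rng.normal(size=(16, 8))     # hypothetical (length, dim) prompt matrices
global_p = rng.normal(size=(16, 8))

# sanity checks: projecting onto the full subspace is the identity, and an
# orthogonal projection never increases the Frobenius norm
assert np.allclose(svd_filter(global_p, local_p, k=8), global_p)
assert np.linalg.norm(svd_filter(global_p, local_p, k=2)) <= np.linalg.norm(global_p)
```

Whatever the paper's exact construction, an orthogonal projection of this kind is what gives the mechanism the "filter conflicting information while preserving essential local information" behavior the review credits it with.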
Rebuttal 1: Rebuttal: Dear Reviewer TNUk: We sincerely thank the reviewer for the constructive and encouraging feedback. We are especially grateful for your positive recognition of our contributions to federated prompt learning, the novelty of the proposed architecture, and the overall experimental design. Below, we address the specific concerns raised in the Weaknesses section: ### Weakness **W1: Correction of Padding Description** A1: Thank you for pointing out this discrepancy. You are correct that the current implementation applies **zero-padding at the end** of the prompt sequence, while the paper previously stated it was added to the middle. We apologize for this inconsistency. We have now **Corrected the description** in Section 3.2 to accurately reflect the implementation: zero-padding is appended to the end of the prompt sequence when needed. **W2: Broadening the Evaluation Scope with Additional Datasets** A2: Thank you for the thoughtful suggestion. We agree that evaluating the effectiveness of prompt length heterogeneity on additional datasets strengthens the generality of our findings. To this end, we have extended our analysis in Section 4.3 to include the **OfficeHome** dataset, which comprises four domains: *Art, Clipart, Product,* and *Real-World*. For this experiment, we allowed each domain to select its own prompt length from eight possible values:[4, 8, 12, 16, 20, 24, 28, 32]. We then evaluated the global classification accuracy under each prompt length configuration. We evaluated global classification accuracy under various prompt length configurations. As shown in the table below, **heterogeneous prompt length settings consistently outperformed uniform ones**. For example, the best-performing configuration on Office31 was [28, 12, 16], achieving **95.45%** accuracy, while the uniform setting [16, 16, 16] reached only 94.72%. 
Similarly, for OfficeHome, the best result was obtained with [12, 32, 12, 16] at **89.67%**, outperforming uniform settings such as [16, 16, 16, 16] (89.17%) and [32, 32, 32, 32] (88.41%). These consistent trends across datasets further support our claim that **prompt length heterogeneity is beneficial in federated learning scenarios with non-IID data**. **Table: Accuracy under different prompt length combinations on multi-domain datasets** | Prompt Lengths [A, W, D] | Office31 (%) | Prompt Lengths [A, C, P, R] | OfficeHome (%) | |-|-|-|-| | [4, 4, 4] | 95.13 | [4, 4, 4, 4] | 88.73 | | [16, 16, 16] | 94.72 |[16, 16, 16, 16] | 89.17 | | [32, 32, 32] | 94.05 |[32, 32, 32, 32] | 88.41 | | [28, 12, 16] (best) | **95.45** |[12, 32, 12, 16] (best) | **89.67** | | [20, 12, 28] | 95.19 | [12, 12, 28, 16] | 89.61 | | [24, 12, 32] | 95.15 | [12, 16, 20, 16] | 89.56 | Due to rebuttal space constraints, we will provide a more detailed and visual analysis (similar to Figure 4 and Figure 5) in the final version of the paper. **W3: Comparison with Fixed-Length FedPHA** A3: We thank the reviewer for this valuable suggestion. To evaluate the robustness of FedPHA under prompt length variation, we conducted a direct comparison between two settings: **FedPHA with a fixed local prompt length** (uniformly set to 16 tokens) and **FedPHA with randomly assigned prompt lengths** (ranging from 4 to 32) across clients. As shown in the table below, the fixed-length version slightly outperforms the variable-length one on both CIFAR-10 and CIFAR-100. This difference may be attributed to the greater **optimization stability** and **aggregation consistency** achieved when all clients share the same prompt structure. In contrast, randomly assigned prompt lengths may **not always align with the specific needs or data distributions of individual clients**, inevitably resulting in minor performance drops. 
However, this setting also reflects a more realistic federated environment, where clients may differ significantly in computational capacity or data complexity. Notably, **FedPHA is the only method capable of operating under heterogeneous prompt configurations**, offering unique flexibility and scalability in such scenarios. In more diverse and non-IID datasets (e.g., Office31, OfficeHome), we observe that **randomized or adaptive prompt lengths lead to improved performance**, highlighting the benefits of prompt flexibility in cross-domain and heterogeneous environments. **Table: Comparison with the SOTA methods using random and fixed prompt lengths on CIFAR-10 and CIFAR-100 across 100 clients.** All methods except FedPHA (last row) use a fixed prompt length of 16. | Methods|CIFAR-10|CIFAR-100| |-|-|-| | CLIP|87.88|64.89| | PromptFL|91.70|72.58| | Prompt+Prox|91.83|72.08| | FedPGP|92.10|74.81| | FedOTP|93.43|75.07| | **FedPHA (fixed length)**|**94.11**|**75.92**| | **FedPHA (random length)**|93.80|75.63|
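To make the corrected padding behavior in W1 concrete, the following is a minimal illustrative sketch (the function name, array layout, and dimensions are our own assumptions, not the paper's code): prompts are treated as (length × dim) arrays, zero-padded at the end of the token axis to a common length before FedAvg-style aggregation.

```python
import numpy as np

def pad_prompt(prompt, target_len):
    """Zero-pad a (length x dim) prompt at the END of the token axis;
    truncate if it is already longer than target_len."""
    length, dim = prompt.shape
    if length >= target_len:
        return prompt[:target_len]
    pad = np.zeros((target_len - length, dim), dtype=prompt.dtype)
    return np.concatenate([prompt, pad], axis=0)

# Clients with heterogeneous prompt lengths (4, 8, and 16 tokens, 512 dims).
rng = np.random.default_rng(0)
clients = [rng.standard_normal((l, 512)) for l in (4, 8, 16)]
padded = [pad_prompt(p, 16) for p in clients]
global_prompt = np.mean(padded, axis=0)  # simple FedAvg-style aggregation
```

Padding at the end (rather than the middle) keeps each client's learned tokens contiguous, so the aggregated global prompt mixes corresponding token positions across clients.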
Summary: The authors introduce Federated Prompt Learning for Heterogeneous Client Adaptation (FedPHA), a novel approach to adapting pre-trained Vision-Language Models (VLMs) within federated learning. The primary motivation is to tackle the persistent heterogeneity challenge by integrating a uniform global prompt for efficient aggregation with diverse local prompts for personalization. Additionally, FedPHA aims to mitigate conflicts between global and local knowledge, enabling clients to maintain their unique characteristics while leveraging shared information. To achieve this, Singular Value Decomposition (SVD)-based projection and bidirectional alignment are introduced, enhancing model generalization while preserving local optimization. Experimental results confirm FedPHA’s superiority over state-of-the-art methods, striking a balance between personalization and global knowledge in heterogeneous federated learning settings. Claims And Evidence: Yes, the claims are well-supported by empirical evidence, including extensive experiments and ablation studies. The results consistently demonstrate FedPHA’s superiority over baselines, validating its effectiveness in handling heterogeneity in federated learning. Methods And Evaluation Criteria: Yes, the chosen datasets cover both single-domain and multi-domain scenarios, effectively simulating heterogeneous federated learning environments. The evaluation metrics and comparison with state-of-the-art methods align well with the problem setting. Theoretical Claims: The paper primarily focuses on experimental validation and no major theoretical inconsistencies were identified. Experimental Designs Or Analyses: Yes, I reviewed the experimental setup, including dataset partitioning, evaluation metrics, and ablation studies. The methodology is sound, and the comparisons with baselines are fair. 
Supplementary Material: Yes, I reviewed the supplementary material, particularly additional experimental details and hyperparameter settings. The supplementary content provides further clarity on implementation. Relation To Broader Scientific Literature: The paper builds on existing work in federated learning and prompt learning by addressing heterogeneity using Singular Value Decomposition (SVD)-based projection and bidirectional alignment. It extends prior federated prompt learning approaches by enabling varying local prompt lengths, improving adaptability to non-IID data distributions. Essential References Not Discussed: The paper provides a comprehensive review of related work, citing essential studies in federated learning, prompt learning, and vision-language models. No significant omissions were identified. Other Strengths And Weaknesses: Strengths (1) The paper effectively combines federated learning and prompt learning, introducing a dual-layer prompt structure that balances global generalization and local personalization. The integration of Singular Value Decomposition (SVD)-based projection and bidirectional alignment is an innovative approach to mitigating conflicts between global and local knowledge. (2) Heterogeneous federated learning is a critical challenge, and this work provides a scalable and adaptable solution. The ability to support varying prompt lengths per client enhances personalization, making it applicable to real-world federated settings with non-IID data distributions. (3) The experiments cover multiple benchmark datasets, including single-domain, multi-domain, and cross-domain settings, demonstrating the robustness of FedPHA. Comparisons with state-of-the-art methods further validate its effectiveness. (4) The paper is well-written, with clear problem formulation, methodology descriptions, and result interpretations. Figures and tables are informative, enhancing readability. 
Weaknesses (1) The explanation of the SVD-based projection is unclear. Could the authors further elaborate on why and how it effectively mitigates conflicts between global and local prompts? A more detailed theoretical discussion would be helpful. (2) The choice of ratio (ρ) in Eq.(9) and alpha (α) in Eq.(12) plays a crucial role in model performance. A deeper sensitivity analysis of these hyperparameters could provide valuable insights into the stability and adaptability of FedPHA across different federated settings. (3) There are some inconsistent expressions in the paper, such as the dataset names in Table 1 and Figure 3, which are not uniform. The authors may consider carefully reviewing the wording for consistency to improve clarity and readability. Other Comments Or Suggestions: No further comments or suggestions. Questions For Authors: I hope the authors can address the concerns I raised in the Weaknesses section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer oKVN: Thank you for your thoughtful review and for raising key concerns regarding our work. We aim to address your concerns in our detailed responses below, hoping to provide clarity and demonstrate the effectiveness of our proposed approach. ### Weakness **W1: Theoretical Explanation of SVD-Based Projection** A1: Thank you for highlighting this point. We agree that a clearer explanation of the SVD-based projection mechanism is essential, especially given its role in handling heterogeneous client data. In **Section 3.3**, we now provide a more detailed account of its purpose, mathematical formulation, and intuitive significance, which we summarize below. While the dual-layer prompt architecture (global + local) provides personalization flexibility, an **explicit mechanism is required to regulate the interaction between global and local prompts**, particularly in highly heterogeneous settings where simple alignment or orthogonality may fail. Our **SVD-based projection** addresses this by filtering out potentially **conflicting directions** in the global prompt space. Formally, we apply **Singular Value Decomposition (SVD)** to the current global prompt matrix, decomposing it into orthonormal components. We then use the **trailing singular vectors (i.e., low-variance directions)** to define a null space that captures information **less dominant in the global prompt**, and potentially misaligned with client-specific distributions. The local prompt is projected onto this subspace, effectively **removing dominant global components** that could interfere with personalization. To mitigate the risk of over-suppression (i.e., losing important local information), we complement this with a **bidirectional alignment strategy** (Section 3.4). Specifically: - A **pull loss** ensures that the original and projected local prompts remain close in the feature space, preserving essential local semantics. 
- A **push loss** introduces a lower-bound distance between the local and global prompts, ensuring that local prompts remain sufficiently distinct and do not collapse toward overly generic representations. Together, these mechanisms strike a balance between **global knowledge integration** and **local specialization**, making the projection process both effective and robust in non-IID federated settings. We hope this richer explanation improves the clarity and motivation of our design. **W2: Sensitivity Analysis of Hyperparameters ρ and α** A2: Thank you for raising this important concern. We agree that analyzing the sensitivity of key hyperparameters is crucial to validating the robustness and adaptability of FedPHA across different federated settings. We conducted additional ablation experiments to investigate the impact of the following two hyperparameters: - ρ (in Eq. (9)): controls the number of low-rank components used for projecting local prompts into the null space of the global prompt. - α (in Eq. (12)): defines the margin in the push loss, ensuring local prompts remain sufficiently distinct from the global prompt. We varied ρ ∈ \{0.5, 0.6, 0.7, 0.8, 0.9\} to examine the effect of the ratio in the SVD-based projection. A larger ρ keeps more dominant directions from the global prompt, while a smaller ρ filters out more components. We observed that: - Performance is stable when ρ∈[0.7,0.9], suggesting that filtering out only the weaker global components is sufficient. - Extremely low ρ may lead to over-filtering and loss of useful shared information. **Table. 
Effect of projection ratio ρ on accuracy** | ρ | Caltech101 (%) | Office31 (%) | CIFAR-100 (%) | |-|-|-|-| | 0.5 | 97.88 | 93.21 | 74.91 | | 0.6 | 98.42 | 93.96 | 75.26 | | 0.7 | 98.86 | 94.02 | **76.37** | | 0.8 (default) | **99.05** | 94.74 | 75.63 | | 0.9 | 98.93 | **94.91** | 75.32 | We tested α∈ \{0.5, 1.0, 1.5, 2.0\} to confirm the Effect of the Margin Parameter α in the Push Loss, and found: - Performance is robust when α∈ [1.0, 1.5]. - Too small a margin (e.g., 0.5) causes local prompts to collapse toward global prompts, reducing personalization. - Too large a margin may hinder knowledge sharing, slightly degrading performance. **Table. Effect of push margin α on accuracy** | α | Caltech101 (%) | Office31 (%) | CIFAR-100 (%) | |-|-|-|-| | 0.5 | 98.11 | 93.07 | 74.02 | | 1.0 (default) | **99.05** | 94.74 | **75.63** | | 1.5 | 98.79 | **95.16** | 75.21 | | 2.0 | 98.32 | 93.85 | 74.85 | Note: All experiments use ViT-B/16 backbone and the same federated training setup as described in Section 4.1. The default setting in our main experiments is ρ=0.8, α=1.0. **W3: Consistency in Terminology and Presentation** A3: Thank you for catching this inconsistency. We have carefully reviewed and revised all dataset names and references throughout the paper to ensure consistency, especially in Table 1, Figure 3, and the surrounding text. These updates improve the overall clarity and readability of the manuscript.
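To make the mechanism in A1/A2 concrete, below is a minimal numerical sketch of one plausible reading of the SVD-based projection and the pull/push losses. All names, dimensions, and the mean-pooled distance used in the push term are our illustrative assumptions; the precise formulations are given by Eqs. (9) and (12) in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L_g, L_l = 768, 16, 8       # hidden size; global / local prompt lengths (illustrative)
rho, alpha = 0.8, 1.0          # default ratio (Eq. (9)) and push margin (Eq. (12))

P_g = rng.standard_normal((L_g, d))   # global prompt
P_l = rng.standard_normal((L_l, d))   # local prompt

# SVD of the global prompt; rows of Vt are orthonormal directions in R^d,
# ordered from dominant (high variance) to trailing (low variance).
U, S, Vt = np.linalg.svd(P_g, full_matrices=False)
r = int(rho * len(S))                 # number of dominant directions (rho controls the split)
V_tail = Vt[r:]                       # trailing directions define the "null space"

# Project the local prompt onto the trailing subspace, removing dominant
# global components that could conflict with client-specific features.
P_l_proj = (P_l @ V_tail.T) @ V_tail

# Pull loss: keep the projected prompt close to the original local prompt.
pull_loss = np.mean((P_l_proj - P_l) ** 2)

# Push loss: keep local and global prompts at least `alpha` apart
# (mean-pooled over tokens here, since the two prompts differ in length).
gap = np.linalg.norm(P_l.mean(axis=0) - P_g.mean(axis=0))
push_loss = max(0.0, alpha - gap)
```

After projection, the local prompt carries no component along the dominant global directions, which is exactly what the pull loss must then counterbalance to avoid over-suppression.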
Summary: FedPHA is a novel FPL framework designed to address heterogeneous client adaptation in federated learning. Traditional FPL methods enforce uniform prompt lengths, which limits their adaptability to clients with diverse data distributions. To overcome this limitation, FedPHA proposes a dual-layer prompt architecture consisting of a fixed-length global prompt for efficient aggregation and variable-length local prompts to preserve client-specific knowledge. To mitigate conflicts between global and local knowledge, FedPHA incorporates svd-based projection to filter conflicting information and bidirectional alignment to maintain balance between generalization and personalization. Through these innovations, FedPHA provides a scalable, robust, and effective solution for federated learning across diverse, non-IID clients, ensuring both efficient knowledge sharing and local adaptability. ## update after rebuttal Claims And Evidence: YES Methods And Evaluation Criteria: Yes, the chosen evaluation metrics align with the standard practices in the field. Theoretical Claims: Authors provide detailed definitions and analysis in Sec 3, including svd-based projection and bidirectional alignment. Experimental Designs Or Analyses: Experimental results are comprehensive, with evaluations conducted across various datasets and comparisons against multiple baseline methods. Supplementary Material: Authors provide the algorithm code, which helps in understanding the method pipeline and implementation details. Relation To Broader Scientific Literature: Authors introduce a novel approach to address heterogeneous client adaptation in FL and mitigate conflicts between global and local knowledge in FPL. 
Essential References Not Discussed: None Other Strengths And Weaknesses: Pros: P1: FedPHA addresses client heterogeneity in FPL by introducing a dual-layer prompt architecture, combining a fixed-length global prompt for efficient aggregation with variable-length local prompts to adapt to diverse client data, ensuring better personalization while maintaining federated learning efficiency. P2: This design enhances personalization without sacrificing global knowledge sharing, enabling clients to retain local characteristics while benefiting from shared representations. This structure ensures better adaptation to non-IID data distributions, making FedPHA more effective in heterogeneous federated learning settings. P3: FedPHA provides a scalable, adaptable, and robust solution for FPL on non-IID datasets, effectively accommodating diverse client distributions and improving overall model performance compared to existing methods. Cons: Q1: Many other methods also employ a Global-Local (G-L) prompt architecture for federated learning. Could the authors provide a more detailed explanation of how their approach fundamentally differs from existing G-L prompt methods? Q2: Could the authors provide more details on how the baselines, such as FedOTP and FedPGP, were configured, including their hyperparameter settings and any modifications made for fair comparison? Q3: The paper lacks ablation experiments to analyze the impact of its key hyperparameters. Could the authors provide additional experiments in this regard? Other Comments Or Suggestions: No Questions For Authors: Please refer to Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer tY2b: We sincerely thank the reviewer for the thoughtful and detailed evaluation of our work. We are especially grateful for the recognition of our contributions in addressing client heterogeneity through the dual-layer prompt architecture, the introduction of SVD-based projection and bidirectional alignment, and the strong experimental validations. Below, we address each of the reviewer’s key concerns: ### Weakness **Q1: Distinction from Existing G-L Prompt Methods** A1: Thank you for raising this important point. As mentioned in Lines 55–99 of the *Introduction* and Lines 141–153 of the *Related Work* section, we have cited and briefly discussed other existing works. Here, we provide a more detailed explanation of how our approach differs fundamentally from previous Global-Local (G-L) prompt-based methods in terms of architecture design: - **FedOTP:** Each client uses a global prompt and a local prompt. However, due to the requirement of the **unbalanced optimal transport framework**, FedOTP enforces that **the global and local prompts must be of equal length**, which restricts flexibility and client adaptability. - **FedPGP:** Each client uses a global prompt and **two local adapters**. In this design, the local prompt is constructed by **adding a local adapter to the global prompt**, creating a strong dependency between them. This additive formulation forces the local prompt to inherit characteristics from the global prompt. Consequently, if the global prompt captures features that are misaligned with a client’s local data distribution, the local prompt may be **forced to adapt to irrelevant or even conflicting patterns**, reducing the effectiveness of personalization. - **FedPHA (Ours):** Each client uses a fixed-length global prompt and a local prompt of varying lengths. 
Unlike existing G-L prompt methods that often assume a **uniform prompt length across clients**, FedPHA explicitly supports **client-specific variable-length local prompts**, enabling better alignment with heterogeneous computational capabilities and data distributions across clients. **Moreover, by decoupling the global and local prompts, FedPHA effectively mitigates the negative transfer from global knowledge to client-specific representations**, enhancing the robustness of personalization. **Q2: Baseline Implementation Details and Fair Comparison** A2: We appreciate the reviewer’s request for greater transparency regarding the baseline configurations. To ensure a fair and controlled comparison, we implemented **FedOTP** and **FedPGP** following the **same experimental protocol** described in **Section 4.1 (Implementation Details)** and **Appendix B.2 (Details of Implementation)**. Below, we provide a summary of both the shared and method-specific settings: - **Shared Training Setup:** All methods were evaluated using a **frozen CLIP backbone** (default: ViT-B/16), with **local training epochs** set to E = 1 and **communication rounds** to R = 50 (reduced to R = 25 for CIFAR-10/100). Training was conducted using **SGD** with a learning rate of 0.001 and a batch size of 32. The default **prompt length** was 16, with a 512-dimensional embedding. - **FedOTP-Specific Configuration:** - Optimal Transport type: `COT` (Unbalanced Optimal Transport) - Sinkhorn distance parameters: `THRESH = 1e-3`, `EPS = 0.1` - OT maximum iterations: `MAX_ITER = 100` - **FedPGP-Specific Configuration:** - Bottleneck dimension: `BOTTLENECK = 4` - Additional loss parameters: `mu = 1`, `temp = 0.5` All other parameters, such as context initialization (`CTX_INIT = False`), class-specific context (`CSC = False`), prompt precision (`PREC = "fp16"`), and class token position (`"end"`), were kept consistent across both baselines to match our implementation setup. 
These configurations ensure that the comparisons are fair and focused solely on the architectural and algorithmic differences. **Q3: Ablation of Key Hyperparameters** A3: We thank the reviewer for this valuable suggestion. We have already conducted ablation studies on several key hyperparameters and presented the results in the corresponding sections of the paper: - **Prompt Length:** The ablation on prompt length has been thoroughly analyzed in **Section 4.3 (Effectiveness of Prompt Length Heterogeneity)**, where we demonstrate that supporting variable-length prompts across clients leads to better performance under heterogeneous settings. - **Communication Rounds (R):** We provide an analysis of the effect of communication rounds in **Appendix C.1 (Convergence Analysis)**, showing how FedPHA converges efficiently and consistently across datasets. - **Hyperparameters ρ and α:** The ablation studies for the ratio ρ in Eq.(9) and the coefficient α in Eq.(12) are discussed in our response to reviewer **oKVN**, where we detail their influence on model performance and stability.
Summary: This paper introduces FedPHA, a novel Federated Prompt Learning (FPL) approach that enables heterogeneous client adaptation using Vision-Language Models (VLMs). The key contributions include: A dual-layer architecture combining a fixed-length global prompt for efficient aggregation and variable-length local prompts for personalization. A Singular Value Decomposition (SVD)-based projection mechanism to resolve conflicts between global and local prompts. A bidirectional alignment strategy to optimize the interaction between global knowledge and local adaptation. Extensive empirical validation on various datasets, demonstrating superior performance over state-of-the-art FL and FPL methods. The paper is technically sound, well-motivated, and presents strong experimental results. However, a few areas need further clarification and improvements, particularly regarding real-world feasibility, communication efficiency, and computational overhead. ## update after rebuttal Thanks to the authors for their thoughtful and comprehensive rebuttal. I am satisfied with the authors' responses and believe they addressed the concerns raised in the initial review. I maintain my positive evaluation of the paper and recommend acceptance. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: no Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: This paper introduces FedPHA, a novel Federated Prompt Learning (FPL) approach that enables heterogeneous client adaptation using Vision-Language Models (VLMs). The key contributions include: A dual-layer architecture combining a fixed-length global prompt for efficient aggregation and variable-length local prompts for personalization. A Singular Value Decomposition (SVD)-based projection mechanism to resolve conflicts between global and local prompts. 
A bidirectional alignment strategy to optimize the interaction between global knowledge and local adaptation. Extensive empirical validation on various datasets, demonstrating superior performance over state-of-the-art FL and FPL methods. The paper is technically sound, well-motivated, and presents strong experimental results. However, a few areas need further clarification and improvements, particularly regarding real-world feasibility, communication efficiency, and computational overhead. Essential References Not Discussed: no Other Strengths And Weaknesses: Strengths: 1. Unlike existing federated prompt learning (FPL) methods that enforce uniform prompt lengths, FedPHA accommodates varying local prompt lengths, making it more adaptable to heterogeneous clients. 2. The SVD-based projection mechanism effectively filters out conflicting components between global and local knowledge, mitigating negative transfer effects. Weaknesses: 1. The method requires sending prompts to a central server, which may still introduce communication overhead despite avoiding full model aggregation. 2. The SVD decomposition of global prompts adds computational overhead at both the server and client side. 3. Since activations and prompts are exchanged, potential privacy risks (e.g., gradient leakage, prompt inversion attacks) could arise. Other Comments Or Suggestions: See Strengths And Weaknesses Questions For Authors: See Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer A325: We appreciate your recognition of our technical contributions—specifically the dual-layer prompt design, SVD-based projection, and bidirectional alignment—as well as your acknowledgment of our method’s strong performance and adaptability to heterogeneous clients. Below, we address your concerns regarding real-world feasibility, communication efficiency, and computational overhead. ### Weakness **W1: Communication Overhead** A1: Thank you for this important observation. While **FedPHA** introduces communication through prompt exchange, we emphasize that prompts are significantly smaller in size compared to full model weights—typically in the order of **KBs vs. hundreds of MBs** for vision-language models (VLMs). Although we do not report the classification accuracy of **FedAvg with full CLIP models** due to **prohibitive computational and communication costs** in our experimental environment, we include it in the comparison table to highlight the dramatic difference in communication overhead. Specifically, FedAvg requires transmitting approximately **497.3 MB per round**, which is over **7,700× larger** than prompt-based methods (64 KB). This made full-model training infeasible within our available resources. Prior studies show that prompt-based methods can match or exceed full-model FL under non-IID settings. We thus focus on efficient prompt-based alternatives better suited for real-world, resource-limited scenarios. In the final version, we will add a more detailed quantitative analysis of communication overhead, comparing **FedPHA** with traditional FL and prompt-based FL (FPL) baselines, and demonstrate that FedPHA achieves a highly favorable trade-off between communication efficiency and classification performance. **Table: Comparison of communication overhead and classification accuracy across different methods** |Method|Transmitted Content|Total Comm. 
/ Round|Caltech101 (%)|Office31 (%)|CIFAR-100 (%)| |-|-|-|-|-|-| |FedAvg+CLIP|Full model weights (CLIP ViT-B/16)|497.3 MB|—|—|—| |PromptFL|Prompt (configurable)|64 KB|93.47|88.08|72.58| |FedOTP| Global prompt (configurable)|64 KB|98.11|89.19|75.07| |FedPGP| Global prompt (configurable)|64 KB|95.86|91.58|74.81| |**FedPHA (Ours)**|Global prompt (configurable)|**64 KB**|**99.05**|**94.74**|**75.63**| **W2: Computational Cost of SVD** A2: We acknowledge the reviewer’s concern regarding the additional computation introduced by the SVD-based projection. However, this operation is performed only **once per communication round**, rather than per training batch, making the overall overhead negligible across multiple rounds. On the client side, both **global prompt decomposition** and **local prompt projection** involve only **low-rank updates**, which are lightweight and computationally efficient. On the server side, since only global prompts are aggregated—not decomposed or projected—the SVD operation incurs no additional computational burden. Moreover, the global prompt typically has a **modest dimensionality** (e.g., 4–32 tokens × hidden size), making the SVD operation significantly cheaper than standard training procedures such as forward and backward passes in vision-language models (VLMs). To support this claim, we will include the following runtime statistics in the final version. **Table. Runtime overhead of SVD-based prompt projection.** All results are averaged over 10 communication rounds. Compared to model training, the SVD overhead is negligible, even in large-scale settings. |Operation Stage|Device|Input Dimension|Avg. Time (ms)|Relative Training Time (%)| |-|-|-|-|-| |Global prompt decomposition|Client|16 × 768|4.2|**<1%**| |Local prompt projection|Client|Local prompt × 768|2.8|**<1%**| |Local model training|Client|Image input × model|4536.8|—| **W3: Potential Privacy Risks** A3: We thank the reviewer for raising this important point. 
While privacy is not the central focus of this work, we note that our current implementation exchanges only prompt embeddings—**not raw data or model gradients**—which already mitigates direct privacy exposure to a significant extent. Compared to PromptFL, FedPHA transmits **global prompts** instead of **personalized local prompts**, reducing the risk of leaking client-specific information. Moreover, FedPHA addresses a key limitation in FedPGP, where **local prompts are dependent on global prompts**, potentially increasing inversion risk. By design, our approach improves decoupling and provides a more privacy-preserving prompt structure. We acknowledge that embedding inversion attacks remain a broader challenge in the federated learning landscape. Although a comprehensive privacy analysis is beyond the scope of this work, we view this as a valuable direction for future research. FedPHA is also compatible with **differential privacy**, **prompt obfuscation**, and **secure aggregation**, which can be explored in follow-up studies to further enhance privacy protection.
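As a supplement to the communication-overhead comparison in A1, a back-of-envelope check of the orders of magnitude is easy to reproduce. The 150M parameter count and fp32 precision below are rough assumptions of ours; the rebuttal's exact 64 KB and 497.3 MB figures depend on the actual configuration and precision, but the conclusion that the full model is thousands of times larger per round holds either way.

```python
def payload_bytes(num_params, bytes_per_param=4):
    """Size in bytes of a tensor with num_params parameters (fp32 by default)."""
    return num_params * bytes_per_param

prompt_kb = payload_bytes(16 * 512) / 1024          # one 16-token, 512-dim prompt -> 32 KB
model_mb = payload_bytes(150_000_000) / 1024**2     # ~150M-param CLIP ViT-B/16 (rough)
ratio = (model_mb * 1024**2) / (prompt_kb * 1024)   # full model vs. one prompt
```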
Breaking Barriers: Combinatorial Algorithms for Non-Monotone Submodular Maximization with Sublinear Adaptivity and $1/e$ Approximation
Accept (poster)
Summary: This paper studies the problem of maximizing a non-monotone submodular function subject to a size constraint. In the problem, we are given a set $\mathcal{U}$ of $n$ elements, (a value-oracle of) a non-monotone submodular function $f$, and a positive integer $k$, and the goal is to find a subset $S$ of $\mathcal{U}$ that maximizes $f(S)$ subject to $|S|\leq k$. A well-known randomized approximation algorithm for the problem achieves the $1/e$-approximation (in expectation). The query complexity of this algorithm is $O(kn)$, which was later improved to $O(n)$ (for some regime of $k$). Additionally, to further improve the scalability, much work has studied parallelizable algorithms for the problem. The parallelizability of an algorithm is measured by the adaptive complexity, i.e., the number of buckets needed to divide the queries to $f$ into adaptive rounds such that within each round the sets to be queried only depend on the results of queries in previous rounds. The lower the adaptive complexity, the higher the parallelizability, because the queries in each round can be arbitrarily parallelized. The current best approximation ratio among those achieved by the algorithms with a sublinear adaptive complexity is $1/e-\epsilon$; however, the algorithm achieving that ratio needs to estimate the multilinear extension of $f$, requiring a substantial number of queries to the original $f$ and making it impractical. The main contribution of this paper is a combinatorial $(1/e-\epsilon)$-approximation algorithm with a sublinear adaptive complexity and a nearly-linear query complexity (to the original $f$). ## update after rebuttal I maintain my overall recommendation (weak reject) based on the discussion with the authors. Claims And Evidence: Although much of the evidence (and even the proposed algorithm itself) is presented in the appendix, the claims seem to be supported. 
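(For readers less familiar with the setting, the $O(kn)$-query Random Greedy that the summary alludes to can be sketched as follows. This is the classic baseline of Buchbinder et al., not the paper's PIG/PItG algorithms; the toy cut objective and the skip-on-non-positive-gain rule standing in for dummy elements are our simplifications.)

```python
import random

def random_greedy(f, universe, k, seed=0):
    """Classic Random Greedy: in each of k rounds, query the marginal gain
    of every remaining element and add one of the k best, chosen uniformly
    at random. Uses O(kn) value-oracle queries and gives a 1/e approximation
    in expectation for non-monotone submodular f under |S| <= k."""
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        base = f(S)
        gains = {u: f(S | {u}) - base for u in universe - S}
        top_k = sorted(gains, key=gains.get, reverse=True)[:k]
        u = rng.choice(top_k)
        if gains[u] > 0:  # stands in for the dummy-element convention
            S.add(u)
    return S

# Toy non-monotone submodular objective: the cut function of a small graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
def cut(S):
    return sum(1 for a, b in edges if (a in S) != (b in S))

S = random_greedy(cut, set(range(4)), k=2)
```

The $k$ sequential rounds, each depending on earlier query answers, are exactly what drives the $\Omega(k)$ adaptive complexity that the parallel algorithms in this paper aim to reduce.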
Methods And Evaluation Criteria: The proposed $(1/e-\epsilon)$-approximation algorithm is called ParallelInterpolatedGreedy (PItG). This algorithm repeats the subroutine called ParallelInterlaceGreedy (PIG), which is also a novel component and achieves the $(1/4-\epsilon)$-approximation. PIG can be seen as a parallel counterpart of the existing non-parallel InterlaceGreedy and InterpolatedGreedy. To obtain PIG and its approximation ratio analysis, the authors first introduce the blended marginal gains strategy, which provides multiple upper bounds on the marginal gain of an element with respect to sets kept in the algorithm. This strategy simplifies InterlaceGreedy and InterpolatedGreedy and makes it possible to incorporate thresholding techniques to reduce their query complexity. Theoretical Claims: I checked the correctness of the contents in the main body. Other than some undefined notation and typos, I did not find any technical issue. Experimental Designs Or Analyses: The proposed algorithms are evaluated using two applications, i.e., Maximum Cut and Revenue Maximization, with large-scale real-world networks. The authors state that they only provide the results for er in the main body, but the results for twitch-gamers are also given there. The selection criterion of baseline methods is unclear. There are many existing parallel algorithms for the problem that are listed in Table 1 but not tested in the experiments. Also, regarding the implementation of the baseline methods, it seems unfair to employ a random subset for the algorithms that require solving the unconstrained problem. Supplementary Material: I only looked into the pseudocodes of the algorithms. Relation To Broader Scientific Literature: Submodular maximization has actively been studied in machine learning. Dealing with non-monotone submodular functions is still important, and parallelization is a reasonable way to accelerate the algorithms. 
The proposed algorithm is the first combinatorial $(1/e-\epsilon)$-approximation algorithm with a sublinear adaptive complexity and a nearly-linear query complexity, which could be a good addition to the literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: I acknowledge the technical contribution of the paper but also believe that the presentation could be improved in both the non-technical and technical parts. Some suggestions are given below: - The presentation of the technical contents should be improved. The most significant issue is that the authors directly dive into the technical details without first explaining the overall algorithm design and analysis. This makes the technical contents quite puzzling. It is understandable that Algorithms 1 and 2 are improved versions of the existing InterlaceGreedy and InterpolatedGreedy, respectively, but how those algorithms are utilized in the proposed algorithms is quite unclear. Do the authors really need Algorithm 1 to design the proposed algorithms? Also, the behavior of the thresholding techniques used in the proposed algorithms is not explained in the main text. The meaning of the alternating-additions property should be explained. - The problem of maximizing non-monotone submodular functions is not motivated adequately. Indeed, the application examples listed by the authors mostly involve monotone submodular functions (e.g., influence maximization). - There are some descriptions that do not make sense. For instance, in the abstract, the authors state that they propose a $(1/4-\epsilon)$-approximation algorithm with high probability for the problem, but the approximation ratio itself is nothing new, so mentioning it as one of the proposals would be confusing for readers. It later turns out that the algorithm is a preliminary result, which should have been explicitly mentioned. The same point applies to Section 1.1. 
In Section 3.1, the authors state that intuitively adding an element from $O$ to the solution should not negatively impact it and that the question "can the second interlaced greedy step be eliminated?" naturally arises. However, the question is not reasonable. If the addition does not negatively impact the solution, the second interlaced greedy step should work well, which would instead suggest that the first step can be eliminated. Other Comments Or Suggestions: Typos and minor things: - Many equations are presented using smaller fonts. - Sec 1: $\mathcal{U}$ and $\epsilon$ are undefined. - Tab 1: The notation || is undefined. - Sec 3.1: the maximum singleton $a_0$ is undefined. - Alg 1: dummy elements should be defined. - Alg 2: "size of solution" is misleading. Questions For Authors: See Other Strengths and Weaknesses. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Ethical Review Concerns: This paper doesn't include impact statements. Code Of Conduct: Affirmed. Overall Recommendation: 2
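For readers unfamiliar with the interlaced structure this review refers to, the following is a hedged sketch of the basic (non-parallel) interlaced-greedy idea that PIG parallelizes. Thresholding, randomization, and the max-singleton correction of the actual InterlaceGreedy/InterpolatedGreedy algorithms are deliberately omitted, so this is an illustration rather than the paper's method.

```python
def interlace_greedy(f, U, k):
    """Hedged sketch: grow two disjoint solutions A and B in alternation,
    so every element is claimed by at most one of them (the alternating-
    additions property), then return the better of the two."""
    A, B = set(), set()
    avail = set(U)
    for _ in range(k):
        for S in (A, B):
            if not avail:
                break
            gains = {e: f(S | {e}) - f(S) for e in avail}
            best = max(gains, key=gains.get)
            if gains[best] > 0:   # skip when nothing helps this solution
                S.add(best)
                avail.discard(best)
    return max((A, B), key=f)
```

The disjointness of the two solutions is the property that the analysis exploits for non-monotone objectives: an element can hurt at most one of the two interlaced solutions.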
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. Regarding the suggestions related to paper presentation and organization, we have provided detailed responses to each point in our reply to Reviewer KJuz. For other questions, we address each point below. ``` The authors state that they only provide the results for er in the main body, but the results for twitch-gamers are also given there. ``` Thank you for pointing this out. We will correct our statement in the next version. ``` The selection criterion of baseline methods is unclear. ``` Thank you for the suggestion. In our current experiments, we focused on parallel algorithms designed specifically for size-constrained problems. While algorithms like ParKnapsack, ParSKP, and ParSSP can handle more complex constraints, such as knapsack constraints, they currently suffer from high query complexity, which limits their practicality. For completeness, we plan to include experimental results for these algorithms in our next version. ``` ... it seems unfair to employ a random subset for the algorithms that require to solve the unconstrained problem. ``` We appreciate the reviewer’s observation regarding the use of a random subset for baseline methods that require solving the unconstrained problem. We would like to clarify the rationale behind our experimental design: - The $1/2$-approximation constant-adaptive algorithm proposed by [1] is used for ANM, AST, and ATG. However, this algorithm relies on querying the multilinear extension $F$ of the objective function, whose estimation requires an impractically large number of queries to the value oracle $f$. This makes the algorithm prohibitively expensive to run in experiments, despite its theoretical guarantees. - Based on our experimental evaluation, the solutions returned by unconstrained algorithms (e.g., [1]) do not dominate other solutions. Thus, their inclusion would not meaningfully improve objective values in practice. 
- In summary, while implementing [1] would align with theoretical comparisons, it would incur significant query complexity without commensurate gains in solution quality. Our goal was to focus on practical and scalable baselines, which better reflect real-world constraints. [1] Chen, L., Feldman, M., & Karbasi, A. (2019). Unconstrained submodular maximization with constant adaptive complexity. In Charikar, M., & Cohen, E. (Eds.), Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, Phoenix, AZ, USA, June 23-26, 2019, pp. 102–113. ACM. ``` The problem of maximizing non-monotone submodular functions is not motivated accordingly. Indeed, the application examples listed by the authors are mostly with monotone submodular functions (e.g., influence maximization). ``` We appreciate the reviewer’s feedback. For the same application, different submodular functions can be defined as either monotone or non-monotone. For example, influence maximization can have non-monotone variants when considering competition or oversaturation effects. We also acknowledge that some of our cited papers focus on monotone functions. To clarify this distinction, we will differentiate between monotone and non-monotone papers and add the following paper as a non-monotone variant for the revenue maximization application. - Amanatidis, Georgios, et al. "Fast adaptive non-monotone submodular maximization subject to a knapsack constraint." Advances in Neural Information Processing Systems 33 (2020). ``` ... but the approximation ratio itself is nothing new, so mentioning it as one of the proposals would be weird for the readers ... ``` We appreciate this insightful observation. While the 1/4-approximation ratio itself is not novel, our algorithm achieves this ratio with high probability ($\ge 1-1/n$). In Cui et al. (2023), ParSSP achieves a $1/4$-approximation ratio in expectation. 
We will revise our abstract as follows, leading with our primary $1/e$-approximation result and presenting the $1/4$ result as a byproduct. "... In this work, we focus on size constraints and present the first combinatorial algorithm matching this bound - a randomized parallel approach achieving $1/e − \varepsilon$ approximation ratio. This result bridges the gap between continuous and combinatorial approaches for this problem. As a byproduct, we also develop a simpler $(1/4 − \varepsilon)$-approximation algorithm with high probability $( \ge 1 − 1/n)$..." ``` In Section 3.1, the authors state that intuitively adding an element from $O$... just meaning that the first step can be eliminated. ``` We sincerely appreciate this insightful observation. We agree that the explanation here might be misleading. We will revise the text to remove ambiguous descriptions as follows. "This complex architecture naturally raises a fundamental question: Can we develop an alternative analysis framework that eliminates the need to explicitly guess the position of $o_{\max}$?" --- Rebuttal Comment 1.1: Comment: Thank you for taking my comments into account. As mentioned in my original feedback, my main concern is about the quality of presentation in both the technical and non-technical sections. I appreciate the detailed comments on the planned revisions. However, given the extent of the necessary changes across the entire paper, I can’t guarantee that the paper will be ready for publication after these revisions.
Summary: This work introduces enhanced solutions for maximizing a non-monotone submodular function under a cardinality constraint. The authors concentrate on solution quality, query count, and adaptivity, which are the key performance indicators in this area of research. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes Relation To Broader Scientific Literature: This paper presents results that are relevant to researchers in submodular maximization. Essential References Not Discussed: This paper provides a deep comparison with the publications in this field. Other Strengths And Weaknesses: "+" The paper is well-written. "+" Thorough and clearly stated comparisons with previous work are provided. "+" The results are interesting, and the techniques might be applicable to other problems in this field. "+" Bridging the gap between continuous and combinatorial algorithms is of high interest. "+" The experimental section uses well-known applications in the field. "+" The strengths and weaknesses of the approach are clearly stated. "-" The results provide marginal improvement compared to the state of the art. "-" The experimental section shows that the proposed algorithms have higher quality but also higher query counts and adaptivity in some cases. Other Comments Or Suggestions: The paper would benefit from a clearer explanation of its core and novel ideas, which would improve its impact. The algorithm's complexity made it a bit hard to follow; providing more intuition would improve understanding. Although the paper does not present an impact statement, I do not have any ethical concern. Questions For Authors: Could you please highlight the new ideas in this work? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. Regarding the suggestions related to paper presentation and organization, we have provided detailed responses to each point in our reply to Reviewer KJuz. ``` The results provide marginal improvement compared to the state of the art. ``` We sincerely appreciate the reviewer’s feedback. While the improvement in approximation ratio may appear incremental at first glance, we would like to clarify the significance of our contribution. Our work represents the first advancement in five years since the breakthrough $1/e$-approximation achieved by a parallel continuous algorithm (Ene & Nguyen, 2020). Specifically, we improve the approximation ratio for parallel combinatorial algorithms from $1/4$ to $1/e$, bridging a key gap between combinatorial and continuous approaches. Given that the current best-known approximation ratio for non-monotone submodular maximization is 0.401 [1], an improvement exceeding 0.11 is, in fact, a meaningful step forward in this challenging problem space. [1] Buchbinder, N. and Feldman, M. Constrained submodular maximization via new bounds for dr-submodular functions. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC, 2024.
Summary: The problem of submodular maximization under a cardinality constraint is a fundamental topic with numerous applications. The submodular function, defined on a ground set $U$, can be either monotone or non-monotone, with the non-monotone case being notably more challenging. There is a wealth of results on non-monotone submodular maximization under a cardinality constraint. The current state-of-the-art algorithm is a $1/e$ approximation algorithm based on continuous greedy, which requires $O(k^2 n \log^2(n))$ oracle queries. This paper introduces a sublinear algorithm that achieves the same $1/e$ approximation guarantee but uses only $O(n \log(n) \log(k))$ queries. This makes the algorithm sublinear, assuming that $O(nk)$ is considered the linear query complexity regime. Furthermore, the paper presents a parallel version of this algorithm, which requires $O(\log(n) \log(k))$ adaptive rounds. Overall, the paper is well-written, and I appreciate the contributions. However, there are a few aspects that need improvement: 1. Clarity and Readability: The paper requires significant polishing. The current version is difficult to follow, even for an expert in the area. Frequent references to the appendix make the paper hard to read, as it requires constant switching between the main text and the appendix. I recommend minimizing these references and integrating more details directly into the paper to improve its flow. 2. Algorithm Overview: The paper lacks a clear, intuitive overview of how the sublinear and parallel algorithms function. Section 5 describes the algorithmic steps but does not provide sufficient intuition for why these steps are taken and how they contribute to the solution. While Sections 5.1 and 5.2 attempt to offer an overview, they are cluttered with references to the appendix, which makes them hard to follow. I suggest adding a more detailed, intuitive explanation of the algorithm's workings, especially in the early sections. 3. 
Unusual Notations: In Algorithm 3 (the main algorithm), the notations used in Lines 4 and 8 are unconventional. It is unclear whether these notations are intentional or typographical errors. There is no explanation of these symbols in the paper, which makes it difficult to understand their meaning. I recommend clarifying or correcting these notations. 4. Preliminaries Section: The "Preliminaries" section is overly brief and lacks a proper explanation of key concepts, such as the parallel model and adaptivity. These concepts should be clarified earlier in the paper. For example, when Algorithm 3 and the "UPDATE" procedure refer to parallelism, it is not clear whether this refers to $\ell$ parallel machines or some other form of parallelism. A more detailed explanation is needed. 5. Table 1: Table 1 is difficult to follow. It includes many results, some of which are not essential for understanding the key contributions of the paper. I recommend simplifying the table to focus on the most important results. 6. Algorithm Structure: I suggest reintroducing the procedures "UPDATE", "DISTRIBUTE", and "Pre-fix selection" into the main body of the paper, or alternatively, dedicating a separate section to explain them. Additionally, Algorithm 3 could be split into two or three sub-algorithms to improve readability—one for Lines 6-14 and another for Lines 16-24. General Suggestions: - Overview and Intuition: It would be helpful to include a concise, intuitive overview of the algorithm in the first few pages, which explains its core idea and key steps. This will make it easier for the reader to understand the subsequent details. - Sublinear Query Complexity: While I understand the concept of sublinear query complexity, it would be useful for the paper to explicitly define and explain it. This would ensure that the reader has a clear understanding of what is meant by sublinear complexity in the context of this paper. 
- Parallel Model and Adaptive Rounds: A more detailed explanation of the parallel model and the concept of adaptive rounds is necessary. This is a crucial aspect of the algorithm, and it would benefit from a clearer definition and description. Overall, the paper presents an interesting and valuable contribution. With some revisions to improve clarity, provide more intuition, and ensure consistency in notation and terminology, the paper would be much more accessible and impactful. Claims And Evidence: Nothing Methods And Evaluation Criteria: Nothing Theoretical Claims: The correctness of the theoretical claims was not explicitly questioned in the review. However, the review does highlight issues related to clarity, notation, and organization, which may indirectly affect the readability and verification of the proofs. Specifically: Notation Issues: The review notes that some notations in Algorithm 3 (Lines 4 and 8) are unconventional and unclear. If these notations are crucial for the proofs, their ambiguity could make it difficult to verify correctness. It is recommended to clarify or correct these notations. Appendix References: Frequent references to the appendix make it hard to follow the logical flow of the paper. If key proofs rely on these sections, integrating more details into the main text would help in assessing their correctness. Preliminaries and Definitions: The review points out that the "Preliminaries" section is overly brief, lacking proper explanations of key concepts like the parallel model and adaptivity. If these concepts play a role in the proofs, their insufficient explanation could hinder proper verification. While no specific errors in the theoretical results were identified, improving the clarity, notation, and structure of the paper would facilitate a more thorough and accessible verification of the proofs. Experimental Designs Or Analyses: For me, the most important part was the theory part. 
Supplementary Material: No Relation To Broader Scientific Literature: The paper contributes to the well-studied problem of submodular maximization under a cardinality constraint, which has numerous applications. The broader scientific literature on this topic includes both monotone and non-monotone submodular functions, with the non-monotone case being particularly challenging. A key reference point in the literature is the state-of-the-art $1/e$ approximation algorithm based on continuous greedy, which requires $O(k^2 n \log^2(n))$ oracle queries. The primary contribution of this paper is introducing a sublinear algorithm that achieves the same $1/e$ approximation guarantee but significantly reduces the query complexity to $O(n \log(n) \log(k))$. This advancement aligns with recent trends in submodular optimization that seek more efficient algorithms with lower query complexity. Additionally, the paper extends its contribution by presenting a parallel version of the algorithm, which requires only $O(\log(n) \log(k))$ adaptive rounds. This connects to the broader literature on parallel submodular optimization, an area of increasing importance for large-scale applications. However, the review suggests that the paper could benefit from a clearer discussion of the parallel model and adaptivity, ensuring that the contribution is well-situated in the context of existing work. In summary, the paper builds on and improves key results in submodular maximization, particularly by introducing a sublinear and parallelizable approach while maintaining the best-known approximation guarantee. To strengthen its connection to prior literature, the paper should explicitly compare its results to previous work in terms of efficiency, adaptivity, and practical implications. Essential References Not Discussed: no Other Strengths And Weaknesses: It's given in the review. Other Comments Or Suggestions: It's given in the review. Questions For Authors: It's given in the review. 
Code Of Conduct: Affirmed. Overall Recommendation: 2
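As a concrete illustration of the parallel model and adaptive rounds this review asks to have defined, here is a hedged sketch of a generic descending-threshold greedy. It is not the paper's PItG/PIG: the function name, the batch rule, and the stopping floor are illustrative, and real low-adaptivity algorithms must add batches more carefully because marginal gains change as a batch grows.

```python
def threshold_greedy(f, U, k, eps=0.5):
    """Hedged sketch of the descending-threshold pattern behind
    low-adaptivity algorithms.  All oracle queries inside one while-
    iteration share the same S, so they can run in parallel and the
    iteration costs a single adaptive round; the geometric decay of the
    threshold bounds the total number of rounds by O(log(k/eps)/eps)."""
    S = set()
    tau0 = max(f({e}) for e in U)    # all singleton queries: one round
    tau, rounds = tau0, 1
    while tau > eps * tau0 / k and len(S) < k:
        # One adaptive round: marginal gains of all remaining candidates
        # w.r.t. the current S, queryable simultaneously.
        gains = {e: f(S | {e}) - f(S) for e in U if e not in S}
        rounds += 1
        for e, g in gains.items():   # add the whole batch above tau
            if g >= tau and len(S) < k:
                S.add(e)
        tau *= 1 - eps               # geometric decay of the threshold
    return S, rounds
```

The sketch deliberately ignores how marginals interact within a batch; controlling that interaction (e.g., via the synchronized batch sizes the authors describe for PIG) is exactly where the technical work of such algorithms lies.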
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive suggestions. We will revise the manuscript to improve clarity and presentation. In Section 1.1, we clarify the technical contributions, focusing on how the simplified InterpolatedGreedy enables parallelization and enhancing the explanation of our parallel algorithm (PIG). In Section 2, we refine definitions of adaptivity, parallel algorithms, and the thresholding technique. We restructure Section 3, remove Section 4 for conciseness, and improve Section 5 with an overview, pseudocode, and key subroutine introductions. Below, we detail these changes. 1. Section 1.1. Contributions. - Our first contribution is a technical contribution that simplifies both InterpolatedGreedy and InterlaceGreedy. Importantly, by eliminating the low-success-probability requirement, the simplified InterpolatedGreedy serves as an efficient framework for parallel algorithm design. We will clarify this in the revised manuscript as follows. "... Most importantly, these simplified variants establish the theoretical foundation for parallel algorithms. By removing branching dependencies and probabilistic guesswork, our framework enables the first efficient parallelization of these interlaced greedy approaches while preserving their approximation guarantees." - Our second contribution is the proposed parallel algorithms, where we explain how integrating the interlaced greedy approach with threshold sampling achieves both theoretical guarantees and practical efficiency. The revised version is below. "... The core innovation of PIG lies in its novel threshold sampling procedure, which simultaneously preserves the crucial alternating selection property of interlaced greedy methods while enabling efficient parallel implementation. Like prior parallel algorithms, PIG maintains descending thresholds for each solution, adding elements whose average marginal gain exceeds the current threshold. 
However, PIG introduces two critical modifications to maintain the interlaced greedy structure: 1) strict synchronization of batch sizes across all $\ell$ parallel solutions, and 2) coordinated element selection to maintain sufficient marginal gain for each solution. This design achieves three fundamental properties. First, it preserves the essential alternating selection property of the interlaced greedy methods. Second, through threshold sampling, it geometrically reduces the size of candidate sets - crucial for achieving sublinear adaptivity. Third, its efficient batch selection ensures each added batch provides sufficient marginal contribution to the solution. Together, these properties allow PIG to match the approximation guarantees of the vanilla interlaced greedy method while achieving parallel efficiency..." 2. Section 2. Preliminary - We will include formal definitions of adaptivity and parallel algorithms in the revised manuscript. Additionally, we will clarify the concept of sublinear adaptive algorithm. - We will provide a concise introduction to the thresholding technique, explaining its role in the parallel algorithms. 3. Section 3. The Blending Technique for Greedy - We will retitle this section as "A Parallel-Friendly Greedy Variant via Blending Technique" to better reflect its focus on developing an efficient parallelizable algorithm. - We will focus on the simplified InterpolatedGreedy (achieving a $1/e$-approximation) and leave the simplified InterlaceGreedy (achieving a $1/4$-approximation) in appendix. - The structure of the section will be as follows: - Section 3.1: An Overview of Prior Work and Its Limitation. - Summarizes how InterpolatedGreedy works and identifies key bottlenecks for parallelization. - Section 3.2: Motivation for Simplification. - Explains the rationale behind our approach and its advantages. - Section 3.3: Technical Overview of Blended Marginal Gains Strategy for InterpolatedGreedy. 
- Provides a high-level explanation of how the blended marginal gains strategy enhances InterpolatedGreedy's efficiency. 4. Section 4. Preliminary Warm-Up of Parallel Approaches: Nearly-Linear Time Algorithms (Removal) - To streamline the presentation and avoid redundancy, we will remove this section. This adjustment allows us to dedicate more space to Sections 3 and 5, where we will elaborate on the high-level ideas and technical details of the proposed algorithms. 5. Section 5. Sublinear Adaptive Algorithms - Section 5.1: Subroutines for PIG. - This subsection will include the pseudocodes for key subroutines (Distribute, Update, Prefix-Selection) and provide a high-level description of their functionality. - Section 5.2: Algorithm Overview - Here, we will introduce the core intuition behind the algorithm, emphasizing its novel components and how they collectively achieve sublinear adaptivity.
Summary: The paper studies non-monotone submodular maximization in the parallel model. They provide two algorithms, with the best one having a $1/e$ approximation factor, $O(\log(n)\log(k))$ adaptivity, and almost linear query complexity. Previously, the only algorithm with a $1/e$ approximation factor and logarithmic adaptivity relied on continuous approaches, which required many queries. Claims And Evidence: They provide proof for their claims. Methods And Evaluation Criteria: Their experiments seem reasonable, but the graph on solution value is missing the data for the ATG algorithm, which appears to have better adaptivity and query complexity. Comparing with PARSSP would also be interesting, as it achieves better approximation factors than other logarithmic adaptivity algorithms. Theoretical Claims: I reviewed the proofs in the main part of the paper, but not the appendix. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: There are many works on this problem, and among the works with logarithmic adaptivity (which is usually the most important factor for parallel algorithms), most have worse approximation factors, and the only work that matches their approximation factor uses continuous methods, which are not practical. Essential References Not Discussed: No Other Strengths And Weaknesses: Their result is interesting because it achieves the best-known approximation factor with logarithmic adaptivity and near-linear query complexity. However, a limitation of their approach is that it does not achieve $O(\log n)$ adaptivity, unlike the continuous algorithm. Other Comments Or Suggestions: Please add the ATG data in Figures 2a and 2d. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ``` Their experiments seem reasonable, but the graph on solution value is missing the data for the ATG algorithm, which appears to have better adaptivity and query complexity. ``` Thank you for pointing this out. Since the objective value is normalized by ATG, the ATG data would appear as a horizontal line with value 1 in the graph. To improve clarity, we will add this line to the graph. ``` Comparing with PARSSP would also be interesting, as it achieves better approximation factors than other logarithmic adaptivity algorithms. ``` Thank you for the suggestion, we will add the comparison to PARSSP in the next version.
Gradient Flow Provably Learns Robust Classifiers for Orthonormal GMMs
Accept (poster)
Summary: This paper investigates the problem of adversarial robustness in deep learning classifiers and provides a theoretical framework demonstrating that standard training methods, specifically gradient flow, can lead to a provably robust classifier under certain conditions. Unlike existing approaches that require adversarial training or explicit defense mechanisms, this work focuses on the natural robustness that emerges from the structure of the data. This work focuses on a binary classification problem where data is drawn from an isotropic Gaussian mixture model with orthonormal cluster centers. The theoretical study first establishes the limit of adversarial robustness by showing that no classifier can defend against an L2 attack of radius greater than $\sqrt{2}/2$, while also proving the existence of an optimal classifier that achieves this bound. This classifier follows a nearest-cluster rule, assigning labels based on the closest cluster center in feature space. The core result of the work is the proof that a two-layer neural network with a polynomial ReLU (pReLU) activation function, trained using gradient flow, can successfully learn this optimal robust classifier without requiring adversarial training. This stands in contrast to standard ReLU networks, which fail to achieve robustness due to their inability to internally learn the multi-cluster structure of the data. This work further provides a rigorous convergence analysis demonstrating that gradient flow on pReLU networks can find a classifier that is not only highly accurate on clean data but also robust to adversarial perturbations. Claims And Evidence: The claims made in the submission are supported by a combination of theoretical proofs and empirical observations. 
The core theoretical contributions, particularly the characterization of the maximum achievable adversarial robustness and the proof that gradient flow on a two-layer network with a polynomial ReLU activation can learn a robust classifier, are well-supported by rigorous mathematical derivations. This work establishes upper bounds on robustness and demonstrates that the proposed classifier achieves these bounds, providing convincing justification through formal theorems and proofs. One of the key strengths of the paper is its detailed convergence analysis, which demonstrates that gradient flow on a properly parameterized network finds a nearly optimal robust classifier under the assumed data distribution. This theoretical backing is essential in supporting the claim that standard training, without adversarial examples, can yield robustness in certain structured settings. The results align with prior works that study adversarial robustness in relation to data geometry. Methods And Evaluation Criteria: The proposed method in this paper is aligned with the theoretical analysis. The primary methodological contribution is the demonstration that gradient flow on a two-layer neural network with a polynomial ReLU (pReLU) activation function can provably learn a robust classifier under the specified conditions. This is a novel and interesting result, particularly in showing that standard training methods can achieve adversarial robustness **without explicit adversarial training**. The evaluation criterion used to assess the robustness of the learned classifier—the maximum L2 perturbation it can withstand while maintaining high accuracy—is a reasonable and widely accepted metric in adversarial robustness research. The paper provides formal derivations to establish the theoretical upper bound on robustness and proves that the proposed method can achieve this bound, which is a strong result. 
Theoretical Claims: I checked the key theoretical proofs provided in the submission and found that the high-level arguments and overall structure of the proofs are sound. The authors develop their results with careful attention to the necessary assumptions, such as the non-degenerate parameter initialization and the scaling conditions required for the convergence analysis of the gradient flow on the pReLU network. Their derivations for the maximum L2-robustness and the subsequent convergence guarantees are presented with clarity and logical consistency. While I did not verify every technical detail down to the last inequality, the main steps and techniques—such as the use of concentration inequalities and the analysis of activation dynamics—appear to be correct and align with established methods in the literature. Overall, I think the proofs convincingly support the theoretical claims in the paper. Experimental Designs Or Analyses: I also checked the experiments presented in the paper. The experiments primarily serve to illustrate key theoretical results rather than to provide a comprehensive empirical evaluation of the proposed approach. The authors use synthetic data generated from an Orthonormal Gaussian Mixture Model (GMM), which is a well-chosen setting to validate their theoretical analysis but does not necessarily reflect the complexities of real-world datasets. The experimental design effectively demonstrates the failure of standard ReLU networks in learning robust classifiers and the corresponding success of pReLU networks under gradient flow. The figures provided show that specific activation functions play a crucial role in achieving robustness, which supports their claim that the architecture must be designed to exploit the structure of the data. 
However, the evaluation lacks a comparison with standard adversarial training techniques or alternative defense mechanisms, making it difficult to assess how well the proposed approach performs relative to existing state-of-the-art methods. One major concern with the experimental design is the absence of evaluations on widely-used benchmark datasets such as CIFAR-10, MNIST, or ImageNet. While synthetic experiments are useful for validating the theoretical results, their practical significance remains unclear. The paper does not explore how well the proposed method generalizes to more complex, high-dimensional data distributions where the assumptions of the Orthonormal GMM may not hold. In summary, I think the experimental design supports the theoretical findings within the specific scope of the problem but does not fully validate the broader applicability of the approach. A more extensive empirical evaluation, including real-world datasets and comparisons with adversarial training methods, would be necessary to establish its practical relevance. Supplementary Material: Yes, I have checked the appendix. It provides proofs of the theorems established in the paper, and they are helpful for understanding the core theoretical contribution of the study. Relation To Broader Scientific Literature: The main contribution of this paper is closely related to the study of adversarial robustness, neural network training dynamics, and implicit regularization in deep learning. Essential References Not Discussed: None Other Strengths And Weaknesses: The most significant weakness is the lack of empirical validation on real-world datasets. The theoretical results are developed within the framework of an Orthonormal Gaussian Mixture Model (GMM), which is a well-structured and mathematically convenient setting but does not necessarily reflect the complexity of natural datasets.
Without experiments on standard benchmarks such as CIFAR-10, MNIST, or ImageNet, it remains unclear whether the proposed method would work effectively in practical scenarios where data distributions are more intricate and do not satisfy the strict assumptions of the orthonormal GMM. The initialization requirements also present a potential weakness. The analysis relies on a non-degenerate initialization with a positive gap to ensure that neurons align correctly with the cluster structure of the data. However, this condition is not naturally satisfied by random initialization, meaning that additional interventions may be needed to ensure that gradient flow leads to a robust classifier. This assumption is acknowledged in the paper, but there is little discussion of how one might relax it or adapt standard training methods to meet these conditions in practice. Likewise, the requirement that the data follows a high-dimensional mixture of orthonormal Gaussians is a strong assumption that may not hold in many real-world applications. This limits the generalizability of the results. Other Comments Or Suggestions: I would suggest the authors conduct experiments on real-world datasets such as CIFAR-10, MNIST, or ImageNet, which may significantly enhance the paper’s practical relevance. In addition, this paper could also benefit from a comparative analysis with adversarial training methods, such as PGD adversarial training and randomized smoothing. Comparing with robust training baselines could consolidate the adversarial robustness claim made in this work. The paper shows that ReLU networks fail, yet pReLU networks succeed in mitigating adversarial attacks. How about using other variants of ReLU, such as GELU or leaky ReLU? Could they yield similar robustness properties? I also suggest the authors discuss potential extensions beyond the Gaussian mixture model.
The structured nature of the data model is useful for theoretical analysis, but adversarial robustness is often studied in much more complex, high-dimensional, and non-Gaussian settings. Exploring whether similar guarantees can be derived for broader classes of data distributions can make the findings more broadly applicable. Questions For Authors: Please check the suggestions made. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions. Here are our responses to your questions: **Experiments on real datasets**: As we stated in Remark 2 (and we will expand it as per reviewer MTUy's suggestion), we focus on developing theoretical results in this paper, and the related experiments on real datasets (MNIST, CIFAR) have been presented in prior works. Nonetheless, we will add experiments on synthetic data that validate our Theorems (see our rebuttal *"Experiments: degenerate vs. non-degenerate initialization"* to reviewer hboM). Lastly, the comparison with adversarial training suggested by the reviewer is an interesting future research topic. **Other choices of activation functions**: If one uses GELU or Leaky-ReLU, the trained network has a similar level of robustness to those (ReLU, linear, Tanh) that are non-robust. We are happy to add their corresponding plots to Figure 2. **Beyond Gaussian models**: We will add a concluding remark discussing future work. The future work will be along the lines of extending the current analysis to more complex data models, for example, a union of low-dimensional subspaces. We hope our rebuttal addresses your questions. If you have additional questions/concerns, please feel free to post them during the discussion phase.
Summary: A common concern in the design of deep learning systems is their susceptibility to imperceptible noise. The authors approach this problem from the angle of finding the maximum adversarial perturbation tolerated by a neural network, without needing adversarial training. It is clarified that this often conditions on the data geometry, since the proof of a robust classifier depends on a sufficiently large margin between each class-conditional cluster in the desired classification space. The desire is to provably find a robust classifier while maintaining clean accuracy. The submission provides a theoretical proof that on specific data models, the maximum robustness any classifier can achieve is based on the separation between the class-conditioned probability mass. Although the data distribution follows specific structural patterns that enable learning a robust classifier, learning the robust classifier by gradient descent is not viable unless one uses a polynomial ReLU (pReLU) proposed in prior work. Based on a balanced Gaussian mixture model, the authors first show that a nearly optimal classifier can be constructed from nearest-cluster rules. Unfortunately, gradient descent must derive classification rules over the trajectory of optimizing weights, which makes learning the robust classifier more difficult. With bounded network parameter initialization, the main result tries to prove the conjecture provided in prior work, which states that gradient descent on a pReLU network with $p > 2$ eventually converges to an adversarially robust network in the radius close to $\frac{\sqrt{2}}{2}$. This is said to hold for an arbitrary classifier $f$ within a specific distance measure (Proposition 3).
The result assumes: a balanced data sampling of the GMM (Assumption 1), and given the definition of non-degeneracy gap using a union of Voronoi regions and void regions, there is a limit on where individual neuron weights may initialize given the weight's angle to nearby cluster centers (Assumption 2). ## Post-rebuttal update Thanks to the authors for a detailed response and new experiments, which will improve the next version. The main pain point remaining is the layout and organization of the main text, which is difficult to remedy in a single review cycle without a major revision, so I will keep my original score. Claims And Evidence: - The main proof is structured similar to prior work studying GF dynamics - the GF problem is split into an alignment phase and a convergence phase. The alignment of neuron weights can be bounded given the assumptions on initialization and non-degeneracy. The convergence can be checked based on monotonic growth of the neuron norm in Voronoi regions and small bounded amount in void regions. The proof seems reasonable and an iterative step over prior work. Methods And Evaluation Criteria: The authors provide some small experiments using synthetic data in Appendix B, although they mainly relate to comparison of regular polynomial ReLU and pReLU. Theoretical Claims: I did not see any major errors with the proofs. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: - The proposed theorems may have implications for prior work which offered conjectures on the convergence properties of pReLU-based 2-layer neural networks. - Future work may extend the results to different classification losses, since the current format assumes a specific loss (due to difficulty in characterizing neuron movement during training). Essential References Not Discussed: N/A Other Strengths And Weaknesses: - The convergence proofs would be a useful extension for the conjectures proposed in prior work. 
- The submission mainly suffers from structural and organization issues. The main theorem is not presented until page 6. The introduction runs until page 3 and contains information that is not necessarily needed for introducing the problem domain (e.g., defining GMM, informal versions of proofs). Most of section 2 could likely be moved to an appendix since it relates to the idealized Bayes classifier with cluster center rules. The main text ends abruptly, but should ideally devote more space to synthetic experiments to validate assumptions, closing remarks, and future work. - The main results are interesting but I was expecting to see some verification of assumptions (1 and 2) on small-scale synthetic data, e.g., demonstration of the effect from degenerate and non-degenerate initializations, or when initialization places neurons close to region borders. It would've been interesting to check the classifier training which obeys Assumption 1 but not Assumption 2 (non-degeneracy shape). It makes sense to need a specific placement of neuron weights relative to cluster centers for the convergence analysis, but it wasn't clear how necessary this is during practical implementation. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
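The reviewer's request for small-scale synthetic experiments presupposes sampling from the paper's data model. A hedged sketch of an orthonormal GMM sampler follows; the label convention (first K1 centers positive) and all names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def sample_orthonormal_gmm(n, D, K1, K2, alpha, rng):
    # Cluster centers: the first K1+K2 standard basis vectors of R^D,
    # orthonormal by construction.  Label +1 for the first K1 clusters
    # and -1 for the remaining K2 (an illustrative convention).
    K = K1 + K2
    centers = np.eye(D)[:K]
    ks = rng.integers(0, K, size=n)           # balanced cluster draws
    X = centers[ks] + rng.normal(scale=alpha / np.sqrt(D), size=(n, D))
    y = np.where(ks < K1, 1.0, -1.0)
    return X, y, centers

rng = np.random.default_rng(0)
X, y, centers = sample_orthonormal_gmm(
    n=1000, D=1000, K1=5, K2=5, alpha=0.1, rng=rng)
# Each sample sits close to its center: the noise norm concentrates
# near alpha, mirroring the concentration the analysis relies on.
```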
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions. Here are our responses to your questions and concerns: **Structure of this paper**: We have shared our view on the current structure of our paper in our rebuttal *"Organization/presentation of this paper"* to reviewer MTUy, and we will make revisions to our manuscript to further improve its presentation and clarity. Regarding your specific suggestions, we will add numerical experiments to the manuscript (please see below) and a concluding remark discussing future work. The future work will be along the lines of extending the current analysis to more complex data models, for example, a union of low-dimensional subspaces, and possibly extending to the transfer learning setting for end-to-end robustness certificates, given some properties of the feature extractor (please see our rebuttal *"Practical implication of our results"* to reviewer MTUy). **Experiments: degenerate vs. non-degenerate initialization**: Great suggestions. We have conducted numerical experiments on synthetic GMM data to illustrate how this (non)degeneracy of the initialization affects the convergence of GD. These experiments validate our Theorem 1, and also show that GD still converges under random initialization (which does not satisfy our Assumption 2), except with a slightly slower alignment between neurons and cluster centers, which agrees with the discussion in our remark "Limitation of our current result." We will add these experiments to the revised manuscript. We describe these numerical experiments in detail: we consider, as per the reviewer's suggestion, two types of initialization:
- *Random initialization*: the initialization follows Assumption 1, with the entries of the initialization shapes $w_{j0}, j=1,\cdots,h$, all being i.i.d. samples from a standard Gaussian. As we discussed, this does not satisfy our Assumption 2 in high-dimensional scenarios.
- *Non-degenerate initialization*: We nudge the above random initialization shape toward cluster centers to increase its non-degeneracy gap. Specifically, for every $j$, we let $w_{j0}\leftarrow w_{j0}+\delta \cdot(\mu-w_{j0})$, where $\mu$ is randomly uniformly selected from one of the cluster centers $\mu_k,k=1,\cdots,K$. The resulting new initialization shape has a non-degeneracy gap of roughly $\delta$, thus satisfying Assumption 2. We also adjust the $|v_j|$ accordingly so the initialization satisfies Assumption 1.
- The initialization scale is $\epsilon=1e-7$.

We run GD with step size $0.2$ on a synthetic GMM dataset of size $n=5000$ with $D=1000, K_1=5, K_2=5,\alpha=0.1$, and keep track of the following:
- *Alignment*: The alignment measures we are interested in are $\max_k\cos(\mu_k,w_j), j=1,\cdots,h$, and our Theorem 1 and its proof suggest that they converge to close to 1 after sufficient training time. For clarity, we report their mean value and the 1st quantile.
- *Loss*: The mean square loss.
- *Distance to $F^{(p)}$*: The quantity $\sup_{x\in S^{D-1}}|f^{(p)}(x;\theta)-F^{(p)}(x)|$. We estimate this quantity by randomly sampling large batches of $x$ and running projected gradient ascent on this distance.

We summarize the findings: 1) GD converges and finds $F^{(p)}$ in both cases, thus verifying our Theorem 1; 2) the alignment between neurons and cluster centers is slower for random initialization than with the nudged initialization with $\delta=0.05$ non-degeneracy gap.
This agrees with our discussion in "Limitation of our current result": having some non-degeneracy gap skips a "burn-in" phase for the neurons' directional dynamics, as shown in the following tables:

|GD, random init.|||||||||
|---|---|---|---|---|---|---|---|---|
| Iterations| t=0| t=50 | t=100 | t=150 | t=200 | t=300 | t=400|t=500|
| Alignment (mean)| 0.05 | 0.06 | 0.14 | 0.29 | 0.47 | 0.69 |0.80 | 0.80|
| Alignment (1st quantile)|0.04| 0.06 | 0.04 | 0.04 | 0.05 | 0.14 |1.00 | 1.00|
| Loss | 1.00| 1.00 | 1.00 | 1.00 | 1.00 | 9.99e-1 | 4.97e-1|1.30e-4|
| Distance to $F^{(p)}$| 5.33e-1| 5.20e-1 | 5.25e-1 | 5.14e-1 | 5.21e-1 | 5.21e-1 | 5.05e-1|3.89e-5|

|GD, non-degenerate init. $\delta=0.05$|||||||||
|---|---|---|---|---|---|---|---|---|
| Iterations| t=0| t=50 | t=100 | t=150 | t=200 | t=300 | t=400|t=500|
| Alignment (mean)| 0.07 | 0.12 | 0.34 | 0.58 | 0.77 | 0.92 |0.95 | 0.95|
| Alignment (1st quantile)|0.05| 0.05 | 0.08 | 0.14 | 0.48 | 1.00 |1.00 | 1.00|
| Loss | 1.00| 1.00 | 1.00 | 1.00 | 1.00 | 9.92e-1 | 1.94e-1|9.15e-5|
| Distance to $F^{(p)}$| 5.33e-1| 5.20e-1 | 5.25e-1 | 5.14e-1 | 5.21e-1 | 5.18e-1 | 4.80e-1|1.57e-5|

Note: 1) for the alignment, $1.00$ is a rounded value; 2) by the end of the training, "GD, random initialization" has a lower mean alignment because it has more neurons in the void regions that do not receive GD updates during training. We hope our rebuttal addresses your concerns regarding the organization of our paper. If you have additional questions/concerns, please feel free to post them during the discussion phase.
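The nudging step and the alignment measure described in this rebuttal can be sketched directly from the stated formulas, $w_{j0}\leftarrow w_{j0}+\delta\cdot(\mu-w_{j0})$ and $\max_k\cos(\mu_k,w_j)$; variable names and the unit normalization of the initial shapes are illustrative assumptions.

```python
import numpy as np

def nudge_init(W0, centers, delta, rng):
    # For each neuron shape w_j0, pick a cluster center mu uniformly at
    # random and move the shape a delta-fraction of the way toward it:
    # w_j0 <- w_j0 + delta * (mu - w_j0), as stated in the rebuttal.
    ks = rng.integers(0, centers.shape[0], size=W0.shape[0])
    return W0 + delta * (centers[ks] - W0)

def alignment(W, centers):
    # The tracked measure: max_k cos(mu_k, w_j) for every neuron j.
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    Cn = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    return (Wn @ Cn.T).max(axis=1)

rng = np.random.default_rng(0)
D, K, h = 1000, 10, 64
centers = np.eye(D)[:K]                           # orthonormal centers
W0 = rng.normal(size=(h, D))
W0 /= np.linalg.norm(W0, axis=1, keepdims=True)   # unit-norm shapes
W1 = nudge_init(W0, centers, delta=0.05, rng=rng)
# Nudging lifts the typical best-center alignment above the random
# baseline, which is of order 1/sqrt(D) in high dimension.
```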
Summary: This paper presents new theoretical findings regarding the feasibility of achieving robustness without adversarial training. Specifically, the paper focuses on a specific data model: a mixture of Gaussian distributions whose cluster centers (i.e., mean vectors of each Gaussian distribution) are orthonormal vectors. The paper first proves that $\ell_2$-robustness cannot be achieved beyond a particular attack radius. However, for smaller attack radii, the adversary’s attack success probability can be upper-bounded for the Bayes optimal classifier, implying the robustness of the Bayes optimal classifier for the given data model. Then, the paper presents its theoretical achievement that, for the given data model, it is feasible to train a robust pReLU network without adversarial training, given a proper assumption on the parameter initialization. ## Update after rebuttal I appreciate the authors' rebuttal. However, I don't think that the rebuttal properly addressed my concern on paper organization, and I can also see that I'm not the only reviewer who considered the paper writing a major issue of the paper. I'll keep my initial rating of 2. Claims And Evidence: All the claims contain proofs in appendices. Methods And Evaluation Criteria: This paper neither proposes a new method nor evaluates it. Theoretical Claims: I read most of the main body, including the proof sketch. The proof idea in the sketch makes sense. Experimental Designs Or Analyses: This paper presents purely theoretical findings and does not contain any experiments. Supplementary Material: This paper contains a massive amount of appendices, and I’m interested in checking those proofs. However, I did not have enough time to read the appendices during the review period. Relation To Broader Scientific Literature: This paper contains an interesting finding that adversarial robustness is achievable without adversarial training for a specific data model with some assumptions.
However, the impact of the findings is unclear to me. Essential References Not Discussed: The paper cited the needed references well. Other Strengths And Weaknesses: ### Strengths 1. To the best of my knowledge, the presented theoretical results are novel. 2. We still lack a theoretical understanding of adversarial training, and this paper adds some new findings to the theory. ### Weaknesses 1. The paper is not well-organized, which makes it harder to understand. 2. The theoretical findings are interesting, and the paper contains many findings. However, the practical implications of those findings are unclear. In Remark 2, the paper referred to other papers to find such practical implications of the problem. (Even this should be discussed separately in a section discussing related works.) However, those papers do not explain the message the reader should get from **this paper**. Other Comments Or Suggestions: 1. Please organize the paper's contents to have a better structure as an academic paper. For example, you can present the summary of the theoretical findings first, then a separate proof sketch section, and then another section containing additional content, such as a comparison to prior work and limitations. Also, please add a conclusion section that summarizes the paper's achievements. 2. Please be more selective about what you present in the main body of the paper. For example, while Propositions 1 and 2 are interesting theoretical statements, I don’t think they have more significant value than the results in Section 3. Moving those Propositions to the Appendix and discussing the results in Section 3 more would be better. 3. Be clear about the message the paper wants to deliver to the ML community. The paper proves some interesting theoretical statements, but the reason why the findings are significant is unclear. Questions For Authors: 1. 
Regarding the impact of this paper, Remark 2 refers to other papers about the practical implications of the problem setting. Please briefly explain the practical implications to me. (In my opinion, this should be explicitly written in the paper.) 2. What do the paper's findings imply related to the practical value of the problem? Specifically, how can those findings benefit the ML community? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions. Here are our responses to your questions and concerns: **Organization/presentation of this paper**: Although this is a concern raised by two reviewers (MTUy and hboM), we do not think there is any major issue with the organization of our manuscript. In fact, Reviewer KSsi finds the paper "extremely well-written". It is our view that the disagreement comes from the fact that in the main sections after the introduction, we put more emphasis on the interpretation of our assumptions and results (and Reviewer KSsi thinks they are "interesting", "very nice, and enlightening"), and relatively less emphasis on certain aspects (message to the ML community, practical implications). We thank reviewers MTUy and hboM for pointing out those issues. We address them in the rebuttal, and they can be easily incorporated into our current manuscript. **What message should one get from this paper?** As a theoretical paper, we believe our messages are clearly stated. - The paper title summarized our theoretical contributions; - Our message appeared in the abstract: "This paper shows that for certain data distributions, one can learn a provably robust classifier using standard learning methods and without adding a defense mechanism"; - We outlined our messages in the introduction section in detail: (0) We aim to answer whether standard training methods can find a robust classifier without adversarial examples. 
We show that for GMM data, (1) data concentration ensures the existence of robust classifiers, and class separation determines the maximum achievable robustness; however, (2) GD on shallow networks cannot find a robust classifier unless the activation is carefully chosen, and we show that GF provably finds a robust classifier with appropriate choices; (3) these results highlight that finding a robust classifier by standard training is possible but requires a joint understanding of the data structure and the training dynamics of GD algorithms. Based on the reviewers' comments, we feel their concern arises because, after the introduction, the main body of the paper emphasizes the technical presentation of our results. To resolve this issue, we will remark on how our results reflect these messages in the main sections. **Practical implication of our results**: We will expand Remark 2 to address the practical implications of our results. We explain them here: In the case of transfer learning, one is given a pretrained feature extractor and aims to train a shallow network classifier. Our theorem provides guidance in obtaining robust shallow network classifiers with standard training for tasks with coarse labels. By tasks with coarse labels, we mean that each class includes many sub-classes, but the label only reveals the class membership. For example, when classifying images of cats and dogs, the dog (cat) class may include different breeds. Notably, many feature extractors, like CLIP, are trained on a massive dataset that contains subclass information and thus can "distinguish" subclasses. That is, the extracted features from different subclasses have a nearly orthogonal GMM structure (verified for CLIP embeddings of CIFAR in Li et al. 2024).
Therefore, classifying these extracted features only with coarse class labels is close to the learning problem we entertained in this paper, and our theorem sheds light on how to train the shallow network with appropriate activation for robustly classifying these features. We note that here, we are addressing the robustness of the classifier against perturbations in the extracted feature; how robust the end-to-end model is also depends on the property of the feature extractor. **How our findings benefit the ML Community**: Our contributions are: - (High-level) We show that for certain data distributions, one can learn a provably robust classifier using standard learning methods and without adding a defense mechanism. - (Theoretical/Technical) Our main theorem introduces the analysis of GF on shallow networks with new activation functions, and reviewer KSsi thinks "the analysis methods developed here may serve as the foundation for future work to analyze more complicated (and more practical) settings." - (Practical) We have expanded our remarks on the practical implications of our results in this rebuttal and will add them to the revised manuscript. **Other additions**: We will add a conclusion section based on the responses above. Moreover, we will add numerical experiments to validate our theoretical results and remarks on future work (Please see our rebuttal to reviewer hboM). We hope our rebuttal addresses your concerns regarding the organization of our paper. If you have additional questions/concerns, please post them during the discussion. Reference: Li, B., Pan, Z., Lyu, K., and Li, J. Feature averaging: An implicit bias of gradient descent leading to non-robustness in neural networks. arXiv preprint arXiv:2410.10322, 2024.
Summary: This paper analyzes the gradient flow dynamics of training a pReLU neural network. It is shown for data coming from Gaussian mixture models with orthonormal cluster centers that, under some technical initialization conditions, the dynamics converge to a particular pReLU model that acts similar to a nearest-cluster rule, with a known (and high) level of robustness. Claims And Evidence: The claims are all theoretical, and are well-supported by clear and rigorous mathematical analyses. Methods And Evaluation Criteria: N/A Theoretical Claims: The mathematical analyses appear to be rigorous and correct. Experimental Designs Or Analyses: There are no experiments. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper is somewhat niche, with impacts primarily in the area of mathematical analysis of neural network training. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is extremely well-written. The content is highly technical, but the authors do a good job at shedding light on what the technical conditions mean intuitively (for instance, through geometric descriptions). The problem setup is definitely impractical in terms of the assumptions being made, as the authors themselves have acknowledged. However, in my opinion, this does not necessarily detract from the contributions and possible impact of the paper; the paper's goal is to conduct a theoretical analysis of a gradient flow that is in general intractable to characterize for complicated data distributions and initialization schemes. So, appropriate assumptions are employed and clearly stated in order to reduce the analysis down to a tractable one, enabling the authors to attain their goal (and to prove a previously open conjecture). Such simplified analyses can still offer intuitive insights into what we may expect for gradient-based learning in the cases where some of the assumptions are relaxed. 
Furthermore, the analysis methods developed here may serve as the foundation for future work to analyze more complicated (and more practical) settings. That said, I do have a few questions and concerns that I would like to see addressed. See below. Other Comments Or Suggestions:
1. Line 98 and Figure 1: Why would those two Gaussian clusters concentrate on D-1 dimensional affine subspaces rather than on spheres?
2. Line 119, Column 2: It seems odd to restrict your attack model to a sphere of radius $r$ rather than a ball of radius $r$, which is the more common method for modeling additive adversarial attacks. That is, why are you using $\min_{\|d\| = 1}$ rather than $\min_{\|d\|\le 1}$? With your model, is it not possible to potentially find some large radius $r$ for which the model is robust, even though there may exist some smaller radius $r' < r$ at which an attack results in misclassification?
3. The interpretation of the Bayes optimal classifier as a nearest cluster rule (with some error term) is very interesting.
4. Notation in Proposition 3: By $x\in S^{D-1}$, do you mean the $D-1$ dimensional unit sphere? If so, it would be good to explicitly define this notation for the reader.
5. Notation: Typically set minus is denoted with a backslash ("\setminus" in LaTeX), not a forward slash.
6. Typo in Assumption 2: "The initialization has s"... there seems to be an extra floating letter "s".
7. Overall, I find the geometric descriptions of the rather technical initialization conditions in Section 3.2.1 to be very nice, and enlightening.
8. On the takeaway message of Theorem 1, and discussion "Nearly optimal robust classifier via GF": Does your analysis guarantee that GF will not suddenly diverge away from the robust baseline $F^{(p)}$ once the flow time increases beyond $T^*$? It seems to me like this might be a possible limitation.
If this is the case, how would one know when to stop the gradient flow process to "capture" the right set of parameters giving maximal robustness?
9. Your analysis in Theorem 1 seems to be saying that GF will result in $f^{(p)}$ approaching $F^{(p)}$, and then claiming robustness of $f^{(p)}$ from that of $F^{(p)}$. If $F^{(p)}$ is such a desirable baseline for robust prediction, why wouldn't one simply use $F^{(p)}$ for prediction in the first place? What is one gaining by learning $f^{(p)}$?
10. Footnote 3: This footnote should be placed after the punctuation, to avoid looking like a mathematical "cubed" symbol.

Questions For Authors: See above "Other Comments Or Suggestions." Code Of Conduct: Affirmed. Overall Recommendation: 4
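Point 3 of this review refers to the nearest-cluster interpretation of the Bayes optimal classifier. A minimal sketch of such a rule follows; the label convention (first $K_1$ centers positive) and the angular metric are illustrative assumptions, and the paper's classifier only approximates such a rule up to an error term.

```python
import numpy as np

def nearest_cluster_rule(x, centers, K1):
    # Classify x by its angularly nearest cluster center: output +1 if
    # that center is among the first K1 ("positive") clusters, else -1.
    # Illustrative sketch of a nearest-cluster rule, not the paper's
    # exact classifier.
    cos = (centers @ x) / (np.linalg.norm(centers, axis=1)
                           * np.linalg.norm(x))
    return 1.0 if np.argmax(cos) < K1 else -1.0

centers = np.eye(4)[:3]   # three orthonormal centers in R^4
x_pos = np.array([0.9, 0.1, 0.0, 0.05])   # near center 0 ("positive")
x_neg = np.array([0.0, 0.1, 0.95, 0.0])   # near center 2 ("negative")
```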
Rebuttal 1: Rebuttal: Thank you for the valuable comments and suggestions, and for the encouraging words acknowledging the strengths of our manuscript. Here are our responses to your questions and concerns: **Concentration of Gaussians**: The claim that a Gaussian $\mathcal{N}(\mu,\frac{\alpha^2}{D}I)$ concentrates around a $(D-1)$-dimensional affine subspace is derived from the fact that (1) as the reviewer pointed out, it concentrates around a sphere, and at the same time, (2) one can cover most of the mass of this high-dimensional sphere by any set $S$ as long as $S$ is a Minkowski sum of a $\mathcal{O}(1/\sqrt{D})$-radius ball and a $(D-1)$-dimensional affine subspace that contains the Gaussian mean $\mu$ (one should think of $S$ as the inflated version of the $(D-1)$-dimensional affine subspace with thickness $\mathcal{O}(1/\sqrt{D})$); this geometric property of high-dimensional spheres is discussed in Section 2.2 of the book by Hopcroft and Kannan. Therefore, most of the mass of this Gaussian $\mathcal{N}(\mu,\frac{\alpha^2}{D}I)$ is within $\mathcal{O}(1/\sqrt{D})$ Euclidean distance of the chosen affine subspace, which is why we say in line 101, "the class-conditioned probability masses concentrate around two (D−1)-dimensional affine subspaces". We hope this clarifies the point, and we are happy to add this explanation to the revised manuscript. **Attack radius**: The $\|d\|=1$ in line 119 is a typo. Thank you for pointing it out. We have $\|d\|\leq 1$ in all other places. **Stop time in Theorem 1**: This is a technical limitation of our analysis.
In Appendix A, line 592, we acknowledged this limitation in the remark "Analysis until finite time $T^*$", and we referred to some previous work on the local convergence of GF, for example (Chatterjee, 2022), which can potentially be utilized to show that the weights stay in a neighborhood of where they are at time $T^*$ for the rest of the GF, because the loss is already close to its global minimum (thus no sudden divergence). **Why not use $F^{(p)}$ directly**: Our aim is to theoretically understand how a neural network can learn a robust classifier by standard training. Although $F^{(p)}$ can be easily constructed, it is unknown whether a network can learn $F^{(p)}$, and our theorem highlights that provably learning $F^{(p)}$ with networks requires a specific choice of activations. We also thank the reviewer for pointing out typos and notation issues in our manuscript. We will fix them in the revision. We hope our rebuttal addresses your concerns. If you have additional questions/concerns, please feel free to post them during the discussion phase. Reference: Hopcroft, J. and Kannan, R. Computer Science Theory for the Information Age. Chatterjee, S. Convergence of gradient descent for deep neural networks. arXiv preprint arXiv:2203.16462, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough responses and clarifications. I encourage you to incorporate your explanations for the concentration, the stop time, and the motivation for studying the training dynamics (and not necessarily the construction of a new or better model), in your revisions. Overall, I find this to be a very nice theoretical paper. I increase my score to 4 (accept).
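The concentration argument in this rebuttal can be checked numerically: samples from $\mathcal{N}(0,\frac{\alpha^2}{D}I)$ have norms concentrating near $\alpha$, while the component along any fixed direction (the distance to a hyperplane through the mean) is only $\mathcal{O}(\alpha/\sqrt{D})$. The parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, alpha, n = 4000, 0.1, 2000
# Centered samples from N(0, (alpha^2 / D) * I_D):
X = rng.normal(scale=alpha / np.sqrt(D), size=(n, D))

radii = np.linalg.norm(X, axis=1)   # concentrate near alpha
slab = np.abs(X[:, 0])              # distance to one fixed hyperplane
# radii has mean ~ alpha with tiny spread, while slab is only
# O(alpha / sqrt(D)): most of the mass lies in a thin slab around a
# (D-1)-dimensional subspace through the mean.
```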
video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model
Accept (poster)
Summary: This paper introduces video-SALMONN-o1, the first open-source reasoning-enhanced audio-visual LLM designed to address the underexplored challenge of general video understanding, which requires complex multimodal (audio-visual-text) reasoning. Current reasoning-optimized LLMs focus narrowly on mathematical/textual tasks or image inputs, limiting their applicability to real-world video scenarios like academic presentations or synthetic video detection. The authors propose a two-pronged approach: (1) a reasoning-intensive dataset with step-by-step solutions for supervised fine-tuning, and (2) process Direct Preference Optimization (pDPO), a novel training method that optimizes reasoning paths via contrastive step selection, eliminating the need for external reward models. They also introduce RivaBench, a benchmark with 4,000+ expert-curated QA pairs spanning diverse video contexts. Results show 3–8% accuracy gains over visual baselines (e.g., LLaVA-OneVision), 6–8% improvement from pDPO on RivaBench, and novel zero-shot synthetic video detection capabilities. ## update after rebuttal Thanks for response. My concerns have all been solved. Claims And Evidence: Yes, they are well supported. Methods And Evaluation Criteria: Yes, they do. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, I checked the experimental designs and analyses in Sec. 6. Supplementary Material: N/A Relation To Broader Scientific Literature: The reasoning-enhanced training strategy can be applied to broad video tasks. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths - The technical contribution is solid, and the conducted experiments are convincing. - The provided experimental results clearly show the effectiveness of the proposed method. ## Weaknesses - Unclear inference mechanics: The paper lacks details about the inference pipeline. Please elaborate more on this. 
For example: - How many reasoning paths are generated per video-question pair during testing? - Is the reward model (or pDPO’s contrastive step selection) applied at inference time, and if so, how does this impact latency? - Limited generalizability validation: While focused on audio-visual tasks, the proposed pDPO and reasoning dataset appear applicable to vision-only LLMs (e.g., LLaVA-OneVision). Testing these components on other architectures would strengthen claims about methodological universality. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate Reviewer dh9B for the positive comments and acknowledgement of our contribution. We would like to address the following questions: 1. __Inference mechanics__: - We use greedy decoding during inference. - We use contrastive step selection only to construct training preference pairs for pDPO. Hence there is no impact on latency. 2. __Generalizability validation__: - While focusing on audio-visual tasks, we also present results on visual-only tasks, including NeXT-QA and the SynthDec partition, throughout the paper (Tables 2-4). --- Rebuttal Comment 1.1: Comment: Thanks for the response. My concerns have all been resolved.
Summary: The paper introduces video-SALMONN-o1, an open-source audio-visual LLM enhanced for general video understanding. It proposes pDPO for reasoning optimization and RivaBench, a new benchmark. The model shows improved accuracy over baselines and zero-shot synthetic video detection capabilities. However, the benchmark's reliance on LLM-generated data raises concerns about accuracy and bias. Claims And Evidence: The claims in the submission lack clear and convincing evidence. The benchmark results may be unreliable due to potential hallucinations and biases in the large models used to create them. Additionally, the authors fail to compare their model with true SOTA models like Gemini 1.5 Pro[3], Qwen2-VL[1], and GLM-4V-Plus[2], undermining the validity of their claims about achieving SOTA performance. - [1] Wang, Peng, et al. "Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution." arXiv preprint arXiv:2409.12191 (2024). - [2] Hong, Wenyi, et al. "Cogvlm2: Visual language models for image and video understanding." arXiv preprint arXiv:2408.16500 (2024). - [3] Team, Gemini, et al. "Gemini: a family of highly capable multimodal models." arXiv preprint arXiv:2312.11805 (2023). Methods And Evaluation Criteria: The proposed methods, including the reasoning-enhanced audio-visual LLM and pDPO, are relevant for general video understanding. However, the evaluation benchmark RivaBench is generated by LLMs, which may introduce biases and inaccuracies due to hallucinations. Overall, while the methods are sensible, the evaluation criteria have significant limitations. Theoretical Claims: The paper does not present any formal theoretical proof. It focuses on empirical results and methodological contributions, such as the introduction of video-SALMONN-o1 and pDPO. Therefore, there are no proofs to verify. Experimental Designs Or Analyses: I checked the experimental designs and found issues, particularly with the benchmark construction. 
The RivaBench was generated using large language models, which can introduce hallucinations and biases, potentially compromising its validity. Additionally, the authors did not compare their model against true SOTA models like Gemini 1.5 Pro, Qwen2-VL, or GLM-4V-Plus, which raises concerns about the robustness of their claims. Supplementary Material: No supplementary material was provided. Relation To Broader Scientific Literature: No broader impact. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: The paper introduces video-SALMONN-o1, a novel open-source audio-visual LLM with enhanced reasoning abilities, and proposes pDPO for efficient step-level reward modeling. It also introduces RivaBench, a reasoning-intensive benchmark. Weaknesses: The benchmark relies on LLM-generated data, which may introduce biases and hallucinations. The paper lacks comparisons with true SOTA models like Gemini 1.5 Pro and Qwen2-VL, potentially overstating its achievements. Supplementary materials are absent, limiting reproducibility. Other Comments Or Suggestions: 1. Include supplementary material to support claims and provide detailed methodology. 2. Validate benchmark results with human annotations to mitigate model biases and hallucinations. 3. Compare with true SOTA models (e.g., Gemini 1.5 Pro, Qwen2-VL, GLM-4V-Plus) for a fair evaluation. Questions For Authors: How do you address potential biases and hallucinations in the benchmark data generated by LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and would like to resolve concerns and misunderstandings as follows: 1. __Regarding the reliability and bias of the benchmark__: - As stated in Section 5, paragraphs 2 and 3, we __always__ use human annotators to generate questions and answers, and __always__ use human annotators to check and validate the questions and answers. - We employed over 50 __expert__ annotators to perform this task. This way, we ensure diversity and minimize bias and hallucination to a level __above that of average human beings__. - Please refer to the "__RivaBench Demo Samples__" section of the demo page in the paper for concrete examples of human-created questions: - One medical example requires professional medical knowledge. The explanation reflects the annotator's knowledge. - One math example requires the ability to understand parabolic partial differential equations. See the detailed step-by-step derivation provided by the annotator. - All samples are accompanied by such expert explanations and will be released upon acceptance. 2. __Comparisons with other SOTA methods__: - We have compared against LLaVA-OneVision in Table 2 as a strong open-source video understanding model baseline. - We included the performance of Qwen2-VL as follows: | Models | VideoMME | NeXT-QA | Academic | StandUp | SynthDec | | -------- | ------- | ------- | ------- | ------- | ------- | | Qwen2-VL | 62.9 | 80.2 | 48.2 | 71.6 | 0.0 | | video-SALMONN-o1 | __65.6__ | __82.3__ | __48.3__ | __76.7__ | __17.8__ | - video-SALMONN-o1 achieved superior performance on all the benchmarks. The improvement on the academic partition is smaller because this data needs __less__ audio-visual understanding but __more__ mathematical reasoning. However, pDPO focuses more on optimizing audio-visual understanding in the reasoning process. - We have also included the results for Gemini-1.5-pro in Table 2. 
Note that Gemini-1.5-pro is a proprietary LLM with unknown (potentially much larger) model sizes and unknown inference mechanism, and hence no direct comparison should be drawn with it. Given that, our model __still outperforms__ Gemini-1.5-pro on NeXT-QA and StandUp.
Summary: This paper introduces video-SALMONN-o1, an open-source reasoning-enhanced audio-visual LLM designed for general video understanding tasks. The authors claim that existing reasoning models focus mainly on either math problems or visual graphical inputs, without sufficient attention to general audio-video understanding. To fill this gap, they propose video-SALMONN-o1 to tackle complex reasoning problems in the field of audio-video understanding. The main contributions include 1) a reasoning-intensive dataset featuring challenging audio-visual questions with step-by-step solutions, 2) a novel process direct preference optimization (pDPO) algorithm, and 3) RivaBench, the first reasoning-intensive video understanding benchmark, with over 4,000 high-quality, expert-curated question-answer pairs across diverse scenarios. Extensive experiments demonstrate the effectiveness of the proposed scheme. Claims And Evidence: The key claim of this paper is about the state of existing reasoning-enhanced MLLMs. The authors claim that existing models merely focus on narrow scenarios such as math, coding, or simple visual graphic inputs, while exploration of general video understanding is missing. The above claim is correct and reasonable, as the community is still exploring proper reasoning paradigms for video understanding. Methods And Evaluation Criteria: Yes. The proposed method for reasoning-enhanced audio-visual understanding is reasonable. The interleaved synchronization design can better handle the multimodal audio-visual data. The evaluation metric for RivaBench is accuracy (MCQ or Yes/No), which is a common standard in evaluating MLLMs. Theoretical Claims: The paper does not contain any proofs or theoretical claims. Experimental Designs Or Analyses: The experiments were conducted on both public benchmarks for general video understanding and the proposed RivaBench. 
Results on these benchmarks clearly demonstrate the effectiveness and significance of the proposed scheme. Supplementary Material: Yes. The authors provided more examples, prompt templates, visualizations, and case studies in the appendix. Relation To Broader Scientific Literature: The key contribution of this paper is to generalize the reasoning capabilities of LLMs/MLLMs to video understanding, especially in the case when both audio and visual information are used. This differentiates the work from existing publications. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Generally, this is a good paper about LLM-based & reasoning-enhanced video understanding. Extending existing paradigms to audio/video understanding is a non-trivial setting. The proposed video-SALMONN-o1 model handles such scenarios well. Other Comments Or Suggestions: N/A Questions For Authors: The authors are encouraged to open-source the code and data mentioned in the submission, which would be a good contribution to the community. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply thank you for acknowledging our effort and contribution! --- Rebuttal Comment 1.1: Comment: Thanks for the response from the authors. I'm keeping my original rating.
Summary: In this work, an RL-based optimization and reasoning-aware framework is proposed for training a large audio-video multi-modal model called Video-SALMONN-o1. This work notes that significant RL-based effort has so far been invested in improving reasoning over mathematical and visual graphical inputs, which motivates introducing RL-based approaches for video-understanding models. To achieve this, a new step-by-step reasoning-based video-LLM training dataset has been proposed. This dataset is used in the reasoning-based Supervised Fine-Tuning (SFT) and RL fine-tuning stages. For the second stage, a combination of PPRM and process DPO objectives has been formulated to improve both solution-level and fine-grained step-level preference training. To enhance training efficiency in the pDPO method, contrastive step selection is utilized, which applies optimization to the steps that are most sensitive to video-based changes. The proposed model is trained in multiple stages, including SFT and the RL stage. To further benchmark audio-video models, a new benchmark called RivaBench is introduced, which effectively evaluates the model's reasoning and audio-encoding capabilities. After complete training, Video-SALMONN-o1 is evaluated on several benchmarks, and ablation studies are performed to justify the various optimization design choices proposed in the framework. ## update after rebuttal Dear Authors, Thank you for providing a rebuttal response. The majority of my concerns have been sufficiently addressed. I strongly recommend the authors incorporate all discussions and additional experiments presented in the rebuttal period into the main manuscript. I will keep my original rating. Claims And Evidence: Yes, the main claims made in the submission are supported by extensive empirical benchmarking results and qualitative results (mostly in the supplementary part). 
The proposed RL-based framework is effective compared to other video-based foundation models. Methods And Evaluation Criteria: Overall, the evaluation criteria make sense for the problem at hand. However, the hallucination behavior of the resultant model is not explicitly evaluated. There are test-only benchmarks such as [1] where the proposed method should be tested to validate the proposed framework in terms of its ability to reduce hallucinations in video-related tasks. [1] VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models Theoretical Claims: No explicit proofs are proposed in the manuscript. Experimental Designs Or Analyses: I believe there are several aspects of the proposed dataset which are not clearly discussed in the manuscript. **1) Validating the design choices in preparing SFT Dataset:** i) It is unclear what the initial data sources are for preparing the reasoning-based version of the dataset. One of the main contributions of this work is to propose an RL/reasoning-friendly dataset. Unfortunately, details on that front are largely missing. Additionally, discussion and statistics must be provided for various aspects such as QA count, number of videos, video-category distribution, etc. ii) As mentioned in lines 138-140 (right column), what concrete filtering steps are employed to reject lower-quality QA samples? iii) In Tab. 1, the duration for the Academic category has a strange standard deviation of 66.1 seconds, which is larger than the mean itself. **2) Missing details regarding the pDPO dataset**: i) The statistics and curation process of the proposed pDPO dataset have been discussed very late in the manuscript. A dedicated discussion should be added upfront in the initial sections of the manuscript. Rather than lengthy text, a visual diagram outlining the major steps of the pDPO dataset generation process is recommended. 
**3) Model Design Choices:** i) It is unclear exactly what part of the model is frozen and what is trainable in each stage of the training. For example, it seems that most model components are frozen, as written in lines 268-270 (right column). **4) RivaBench:** The manuscript has provided a few video examples of the proposed evaluation benchmark. It can be noticed that in most examples, the speech content is directly rendered on the video. This could cause models to overlook the audio tokens and directly read the subtitles from the visual inputs, which is a shortcut. How do the authors aim to resolve this issue? **5) Evaluations and Experiments:** i) In my understanding, the proposed Video-LMM is based on the video-SALMONN 2 model, but unfortunately, a performance comparison with this model is not performed. **6) Training compute and efficiency comparison**: As the proposed framework utilizes the RL-based pDPO technique, how much extra compute and training cost is incurred compared to the baseline model? A clear compute cost analysis should be provided for the proposed framework. Supplementary Material: I have reviewed the supplementary document listed after the references in the main manuscript. Relation To Broader Scientific Literature: This paper aims to advance the audio-video-language reasoning abilities of foundation models using reasoning- and RL-based training frameworks. Essential References Not Discussed: RL-based literature for vision-based LLM models is not discussed in the manuscript. It is important that this work provides a related work section for RL-based methods (e.g., [2], [3]) for image domains and contrasts them with the proposed techniques. 
[2] RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness [3] Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward Other Strengths And Weaknesses: Overall, the proposed training and evaluation datasets, alongside the RL-based optimization framework, are motivating and useful for the research community. However, the overall presentation style of the manuscript is poor and too generic, and many important details are missing from the paper. It is strongly recommended to address all the comments mentioned in the earlier sections of this review. Other Comments Or Suggestions: Minor: In line 133, there is a typo. Questions For Authors: Please refer to the above parts for questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
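For readers unfamiliar with the preference-optimization machinery discussed in this review: pDPO builds on the standard (response-level) DPO loss. Below is a minimal numerical sketch of plain DPO — not the paper's step-level pDPO variant, and with purely illustrative log-probabilities:

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Standard DPO objective on one preference pair: push the policy's
    # log-ratio for the preferred response (w) above the dispreferred one (l),
    # both measured relative to a frozen reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return float(-np.log(1.0 / (1.0 + np.exp(-margin))))  # -log sigmoid(margin)

# Policy already prefers the chosen reasoning path relative to the reference:
loss_good = dpo_loss(logp_w=-4.0, logp_l=-9.0, ref_logp_w=-6.0, ref_logp_l=-6.0)
# Same pair with preferences swapped: negative margin, larger loss:
loss_bad = dpo_loss(logp_w=-9.0, logp_l=-4.0, ref_logp_w=-6.0, ref_logp_l=-6.0)
print(loss_good < np.log(2) < loss_bad)  # True: log 2 is the zero-margin loss
```

pDPO, as summarized above, applies this kind of preference objective at the level of individual reasoning steps rather than whole responses, with contrastive step selection choosing which steps form the pairs.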
Rebuttal 1: Rebuttal: We sincerely appreciate the detailed and constructive reviews provided by Reviewer BoZJ. We would like to address the concerns and suggestions as follows: 1. We follow the evaluation described in VideoHallucer [1] and report the overall accuracy (when the entire pair is correct) for each category as follows: | Model | Object relation | Temporal | Semantic detail | Factual | Non-factual | Overall | | -------- | ------- | ------- | ------- | ------- | ------- | ------- | | Gemini-1.5-pro [1] | 52.0 | 18.5 | __53.5__ | 16.5 | 48.5 | 37.8 | | video-SALMONN-o1 | __63.4__ | __56.4__ | 16.0 | __43.0__ | __55.6__ | __46.2__ | [1] VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models 2. We appreciate the suggestions and hence provide the following descriptions of how we will improve the presentation in the revised version. - __Validating the design choices__: - __The SFT set__: contains 20k videos with 30k QA pairs. Mean duration 97.3s and maximum duration 256.1s. The majority of videos are randomly downloaded general YouTube videos, with 5k videos focusing on talks and presentations. We provide metadata files in the GitHub repo. - __How to filter__: We take the video description, question and answer, feed them into GPT-4o, and ask whether the question requires complicated logical reasoning. We keep the QA pair if the answer is Yes, and discard it if No. - __The large standard deviation of the Academic category__: It contains math lectures which may last over 20 minutes, and short conference presentations that only last 1 or 2 minutes. - __pDPO dataset__: The description is provided in Section 6.2 from line 307. The above reasoning-intensive SFT data is used for pDPO. No additional data is generated for pDPO. - __Model Design Choices__: The visual and audio encoders are frozen. The backbone LLM is trained with LoRA (rank=64) and the modality aligner (24M parameters) is trained. 
- __Subtitle shortcut in RivaBench__: - This only appears in 50% of the StandUp partition and not at all in the Academic partition. - Even with subtitles, the frame rate of the video is low enough that the subtitles are often incomplete as seen by the visual encoder. - The model has to leverage both audio and visual information, and this is reflected by the fact that GPT-4o (visual only) is worse than Gemini-1.5-pro (audio-visual). - __Regarding comparison to video-SALMONN-2__: video-SALMONN-2 is not open-source yet. The proposed model only resembles its model structure and uses some open-source data. We do not have or use any model parameters from video-SALMONN-2. - __Computational Cost__: The two additional computations are (i) sample generation and (ii) training the model with pDPO. For (i), we used 16 A100 GPUs for 48 hours to generate the number of samples described in lines 317-319. This is a one-off cost, and we do not need to generate the samples again. (ii) We use eight A100 GPUs for 24 hours to train pDPO (see lines 270-271). __Note that these are all training-time costs, and they do not incur any additional test-time cost at all.__ - __Open Source__: We promise to provide the model checkpoint, training and inference code, as well as the benchmark data upon acceptance of the paper. 3. We will also include the following references [2, 3] in section 2 of our revised paper:\ [2] RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness \ [3] Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward
Addressing Misspecification in Simulation-based Inference through Data-driven Calibration
Accept (oral)
Summary: The paper deals with the important issue of making neural posterior estimation methods more robust against model misspecification. The authors suggest using a small set of labeled real-world data to calibrate the posterior inference in the face of a misspecified simulator. I am very short on time for ICML reviews. Apologies for my reviews being a bit short. Claims And Evidence: The authors claim that using a small labeled real dataset can help calibrate posterior inference from a misspecified simulator. The experiments are extensive and indeed showcase this behavior. Methods And Evaluation Criteria: The examples make sense indeed. As previously, I continue to think that *labeled* real datasets (i.e. real datasets where the true parameters are known) are a rather niche situation. Yet I do acknowledge that there are relevant applications where we do indeed have such a dataset. Theoretical Claims: No proofs in the paper. Experimental Designs Or Analyses: The experiments and their evaluation look sensible to me. Supplementary Material: I don't think there was supplemental material. Relation To Broader Scientific Literature: Model misspecification is one of the major issues in SBI. The authors make an important contribution to this literature. Essential References Not Discussed: The authors may want to additionally cite two recent works on increasing robustness for SBI, which both work with *unlabeled* real datasets: https://arxiv.org/abs/2501.13483, https://arxiv.org/abs/2502.04949 They may also check if there are more relevant references about misspecification in SBI in these papers worth citing here as well. Other Strengths And Weaknesses: see above Other Comments Or Suggestions: none Questions For Authors: The authors have already answered all my questions during our previous submission processes. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your review and your recognition of our contribution to the literature on simulation-based inference (SBI) under model misspecification. Regarding your concern about the availability of labeled real datasets, we acknowledge that such datasets are not always accessible. However, as discussed with Reviewer x5KJ, moving away from labeled calibration data in method papers introduces the challenge of defining alternative evaluation metrics. Without access to true parameters, validating the reliability of a method becomes significantly more difficult. That said, we agree that exploring approaches leveraging unlabeled real datasets is an interesting direction for future work. We will incorporate the suggested references on increasing robustness in SBI and check whether they contain additional relevant citations on misspecification. Thank you again for your thoughtful feedback and your time.
Summary: This paper introduces Robust Posterior Estimation (RoPE), a method for addressing model misspecification in simulation-based inference (SBI). Standard SBI algorithms often assume a well-specified simulator, leading to biased posterior approximations when this assumption is violated. RoPE mitigates this problem in scenarios where a small calibration set of real-world observations paired with ground-truth parameters is available. The method first learns a neural embedding via a standard SBI approach (neural posterior estimation, NPE). It then fine-tunes this embedding using the calibration set. Crucially, RoPE solves an optimal transport (OT) problem between the embedded simulated data and the embedded observed data from the calibration set. At inference time, the OT solution is used to obtain the posterior as a weighted sum of posteriors conditioned on simulated data. The paper evaluates RoPE on six benchmark tasks (two existing, four new) and reports performance compared to baselines. ## Update after rebuttal I thank the authors for their detailed rebuttal, which successfully addressed all my questions and concerns. I agree with their proposed handling of the formal SBI baseline description (referencing the appendix). My initial positive evaluation stands, and I believe the paper makes a valuable contribution to the SBI literature. Claims And Evidence: The paper's claims center around RoPE's ability to provide accurate posterior inference in the presence of model misspecification. These claims must be understood within a specific, and somewhat less common, context within the broader field of simulation-based inference (SBI): the availability of a small calibration set consisting of real-world observations paired with corresponding ground-truth parameter values. Many typical SBI applications operate without access to such paired data, even for a small calibration set. 
While the paper acknowledges this setting (citing examples like cardiovascular system inference and soft sensor development), a more thorough discussion of the limitations this assumption places on RoPE's general applicability would strengthen the work. The claims are therefore best interpreted as statements about performance conditional on the availability of this calibration data. The authors provide new benchmark tasks, which is positive, but validation on real-world applications would have provided stronger support. Beyond the contextual considerations, the claims are generally supported by a combination of theoretical justifications (based on an adapted definition of model misspecification for SBI) and empirical evaluations across several benchmark tasks. This is a minor point, but the clarity of the evidence presentation, particularly in Figure 1, somewhat hinders a complete and immediate assessment of these claims across all tested scenarios. The dense and complex nature of Figure 1 makes it challenging to definitively confirm RoPE's superior performance against all baselines on each specific task without significant effort. This difficulty in visually parsing the results slightly reduces the "convincing" aspect of the evidence, although the underlying data may well support the claims. Methods And Evaluation Criteria: The introduction of four new benchmark tasks is a valuable contribution to the SBI literature, providing standardized testbeds for evaluating methods under model misspecification. However, the chosen evaluation metrics, LPP and ACAUC, while relevant, are less common in the broader SBI literature. Using more established metrics could facilitate comparisons with other SBI methods and improve the interpretability of the results for a wider audience. Specifically, the negative log probability of the true test parameter (NLTP), as described in Lueckmann et al. (2021) (and used by, e.g., Papamakarios et al. 2016, 2017, Greenberg et al. 
2019), would provide a direct measure of posterior accuracy (and is closely related to the LPP used here, but more standard). Furthermore, the ACAUC metric, as implemented, assesses calibration on a per-parameter basis. This approach might miss joint miscalibrations across multiple parameter dimensions. Considering metrics like expected coverage (as used in, e.g., Miller et al., Deistler et al., Hermans et al.) could offer a more comprehensive assessment of calibration, capturing potential dependencies in the posterior uncertainty. While LPP and ACAUC provide some insight, supplementing them with (or replacing them by) NLTP and expected coverage would strengthen the evaluation. Theoretical Claims: I did not thoroughly check all proofs in the supplementary material, focusing instead on the conceptual soundness and experimental validation. The optimal transport formulation appears standard. Experimental Designs Or Analyses: I reviewed the experimental design. The choice of baselines is generally appropriate. However, as mentioned above, Figure 1 is overloaded and makes it difficult to assess the results clearly. A more detailed explanation of the "SBI" baseline in the main text is needed, as are explanations of the MLP's strong performance (despite being based on Gaussian posteriors), and of the performance of RoPE for different tasks. Supplementary Material: Yes, I acknowledge that the appendix contains clarification for the definition of the calibration set and of model misspecification for SBI. I did not read the remaining supplements in detail. Relation To Broader Scientific Literature: The paper builds upon the SBI literature and addresses model misspecification. It cites key works and acknowledges the unreliability of SBI under misspecification. The use of optimal transport is a novel contribution in this specific context. The paper differentiates itself by focusing on the scenario with a small, labeled calibration set. 
Essential References Not Discussed: No, the adapted definition of model misspecification is stated clearly in the introduction and the related work on this topic is discussed in the corresponding section. Other Strengths And Weaknesses: Originality: The paper presents a novel approach (RoPE) using optimal transport. The introduction of new benchmark tasks also contributes to originality. Significance: Model misspecification is a critical challenge in SBI. RoPE offers a potential solution, but its significance is somewhat limited by the reliance on a calibration set, which may not be available in all SBI applications. Clarity: The paper is well-written and well-structured overall. However, the presentation could be significantly improved in terms of figures. Figure 1 is extremely difficult to read and interpret. The lack of a clear concept figure (as shown in Appendix B) makes it harder to grasp the overall approach at the beginning. Explanations for certain results (SBI baseline, MLP baseline, RoPE performance on different tasks) are currently insufficient. Other Comments Or Suggestions: No. Questions For Authors: 1. Figure 1 Overload: Figure 1 is extremely difficult to interpret due to the sheer number of lines, markers, and overlapping information. Could you simplify this figure, perhaps by splitting it into multiple figures, or by using a different visualization strategy? Also, please explicitly state in the caption what constitutes a "good" value for LPP and ACAUC (e.g., "Higher LPP values indicate better performance; ACAUC values closer to zero indicate better calibration"). 2. "SBI" Baseline Explanation: The "SBI" baseline is not clearly explained in the main text. Please provide a more detailed description of how this baseline is trained and evaluated. Specifically, clarify what data is used for training and testing this baseline. 3. MLP Baseline Performance: Why does the MLP baseline perform so well, even though it assumes a Gaussian posterior? 
Are the true posteriors in the benchmark tasks close to Gaussian, or is there another explanation? 4. RoPE performance variations: It is unclear why the performance of RoPE varies across the different tasks. For example, explain in more details the performance for the SIR example. It was stated that the SIR task might be less misspecified and that NPE thus performs better. However, does this imply that for well-specified use-cases, RoPE will perform worse in general, e.g., be overly underconfident like for the SIR example? E.g., how does RoPE perform on the standard benchmarking tasks defined in Lueckmann et al. 2021 that are readily available in the `sbibm` package? 5. Calibration Set Assumption: The paper relies on the availability of a calibration set with ground-truth parameter values. While this is a valid scenario, it's less common in many SBI applications. Could you discuss the limitations of RoPE in settings where such a calibration set is not available? Are there potential extensions or modifications to RoPE that could address such scenarios? Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful and constructive feedback. We appreciate your insights and will use them to improve the final version of our manuscript. Below, we detail the modifications we plan to make in response to your comments. ## Figure 1 is overly dense We acknowledge that Figure 1 contains too much information and will revise it to improve readability. We will use the additional space to split the figure in two. This will allow us to make the individual plots larger and have space to add the method labels directly on the plots, making them easier to parse. We will also implement the suggested changes. ## Clarification on NLTP and LPP We confirm that NLTP and LPP are exactly the same metric. We will make this explicit in the text to avoid confusion. ## ACAUC and joint calibration ACAUC quantifies marginal calibration by measuring the gap between empirical and expected coverage, averaged over confidence levels. However, it does not capture joint calibration, as dependencies between parameters are not explicitly assessed. Alternative dependence-sensitive metrics may require larger test sets to be stable. Given this limitation, we will explicitly discuss the trade-offs of ACAUC in the revised manuscript. ## SBI baseline details There is a formal description of the SBI baseline in the appendix (lines 965-970). Do you think it should be moved to the main text? The SBI baseline is trained and evaluated purely on simulated data. Since RoPE assumes a conditional independence structure, the SBI baseline should provide an upper bound on RoPE’s performance in terms of LPP. ## Performance of the MLP baseline The strong performance of the MLP baseline is likely due to the unimodal nature of the benchmark posteriors, where a Gaussian is a reasonable approximation. However, this baseline only performs well when given the largest calibration budget. For smaller budgets (200 and below), RoPE consistently outperforms it. 
Extending benchmarking tasks for SBI under misspecification with multi-modal posteriors would be an interesting direction for future work. ## Performance variations of RoPE RoPE consistently returns calibrated and informative posteriors, as indicated by its LPP scores being higher than those of the prior. The results in Figure 1 are based on a fixed entropy regularization parameter, but in practice, this parameter would be tuned. With the chosen setting, RoPE can sometimes be slightly underconfident while remaining well-calibrated across all tasks. For Task E, using a smaller regularization value ($\gamma = 0.1$ or lower) resulted in stronger performance than the one we reported in the manuscript (with $\gamma=0.5$). We expect RoPE to match SBI closely on standard benchmarks when $\gamma$ is small. Investigating further the effects of $\gamma$, parameter dimensionality, and test set size would be interesting but probably goes beyond the scope of this work. Finally, we emphasize that when a simulator is well-specified, standard SBI methods should be preferred over RoPE, as RoPE is specifically designed for misspecified settings. ## The need for a labeled calibration set RoPE relies on a calibration set with ground-truth parameters, but as shown in our experiments, it can also be used without the fine-tuning step. In most benchmarks, this "OT-only" version already produces well-calibrated posteriors. However, the labeled test set was crucial in identifying that OT-only was insufficient for Task E. In a misspecified setting, learning a new model is necessary, and empirical data is required to validate its reliability. This justifies our assumption that a small set of real observations could be used for training. We nevertheless agree with you that labeled calibration data is not always available in real-world applications. In such cases, practitioners must define metrics that assess whether OT-only provides sufficient performance.
In some applications, a fine-tuning step similar to RoPE’s could still be justified without labeled data, but this would require application-specific validation strategies. We will clarify these points in the final version of the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal. All my concerns and questions have been addressed. Regarding the formal description of the SBI baseline I think it is fine keeping a pointer to the appendix for a detailed description. Overall, I believe this paper will make a valuable contribution to the SBI literature, which is consistent with my initial positive evaluation. I look forward to seeing the final version.
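As a concrete illustration of the entropy-regularization trade-off discussed in this thread, here is a minimal numpy sketch of Sinkhorn-style entropy-regularized OT between uniform marginals (a generic illustration, not RoPE's actual implementation; the cost matrix and values are hypothetical): larger regularization pushes the transport plan toward uniform, smaller regularization toward a hard matching.

```python
import numpy as np

def sinkhorn_plan(M, gamma, n_iter=2000):
    """Entropy-regularized OT plan for cost matrix M (n x m)
    with uniform marginals; gamma is the regularization strength."""
    n, m = M.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-M / gamma)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):        # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
M = rng.random((8, 8))                   # e.g. distances between embeddings
P_sharp = sinkhorn_plan(M, gamma=0.2)    # closer to a hard assignment
P_soft = sinkhorn_plan(M, gamma=5.0)     # closer to the uniform plan
entropy = lambda P: -np.sum(P * np.log(P + 1e-30))
# entropy(P_soft) > entropy(P_sharp): larger gamma -> more uniform plan
```

This is the mechanism behind the rebuttal's point that meaningless matches drive the plan toward uniform, making the resulting posterior revert to the prior.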
Summary: Because it is challenging to use simulation-based inference (SBI) under model mis-specification, this paper proposes optimal transport to model the distribution between SBI-simulated data and a set of observed data, and then constructs posterior distributions with neural posterior estimation (NPE). It relies on a calibration set {(theta,x_0)} linking theta and x_0 to modify pi(theta|x_s) when observing x_0. The method is demonstrated on a number of simulated and real benchmark datasets. Claims And Evidence: Main Comment p.2 col 1: the 3rd quantity that access is given to is "a small calibration set of labelled real world observations C composed of iid samples from p^*(theta,x_0)". I understand that this is required by the developed method. My question is whether it is actually likely in practice that the samples in the calibration set will really be a) samples from p^*, and b) iid. (theta,x_0) can quite credibly (and perhaps frequently) not be drawn from the representative population p^*. Following the medical example in para 2 of the introduction, collecting (theta,x_0) can be very expensive. The decision to take these measurements could then credibly be selectively made based on estimates of quantities that are direct functions of x_0 or theta. E.g. (i) we should only measure the most severe cases, or (ii) those cases that will produce decisions/data that will lie on some decision boundary (operate or not). What won't happen is that a random, iid decision (coin flip) will be made whether to make the measurement for each presenting patient (which would represent draws from p^*). My argument here is that the (theta,x_0) calibration set won't be iid draws from p^*, but will be draws from some distribution p^** (say) that is dominated by (has a smaller domain than) p^*.
In which case, if you assume the calibration set are draws from p^*, when you actually observe data from a patient who lies comfortably outside the observed domain of p^*, your modelling will go awry. So while I like the paper as written, I think it would be healthy to: * Understand how one might practically handle this (p^** not p^*) within the context of the method; What extra information would you need, and could credibly expect to have available? Etc. * Perhaps see (in a toy example only) what would happen with posterior estimation with the current method when you in fact have samples from p^** but you assume these are from p^* (i.e. something will go wrong), and how correcting for this might correct the analysis. This will go a long way to making this method more applicable in practice. Related Comment My understanding of (3) (p. 3 col 2) is that you're assuming that the sufficient statistics of x_s for model p, are the same as the sufficient statistics of x_0 for p^*. Given that the premise of the paper is that p is a mis-specified version of p^*, how likely is this assumption? I appreciate that the following paragraph after (3) discusses this to some extent. However, I'd be interested in understanding more what happens within the method if p is increasingly "too simple" so that (3) becomes increasingly violated. At what point does the method break down or possibly perform poorly? (Discussion response in the text is adequate.) Methods And Evaluation Criteria: Ok. Theoretical Claims: The contribution is essentially methodological, with no new theory. Experimental Designs Or Analyses: Reviewed for intuitive sense. Supplementary Material: With apologies, but I didn't have time to review this. Relation To Broader Scientific Literature: Good. Essential References Not Discussed: Good, to the extent of my knowledge. Other Strengths And Weaknesses: See above & below comments. Other Comments Or Suggestions: Small comments: * p.2 col 1. 
l.31 "which reduces uncertainty compared to the prior distribution" - perhaps nit-picking, but posteriors don't always have 'reduced uncertainty' (smaller spread) over the prior, especially in situations of prior mis-specification (for example). * p.2 col 1. l.33-34: "the ... simulator S that ... approximates p^*". - again, perhaps nit-picking, but simulators are forward-simulation schemes. There may be a density function associated with them, but this density function (that approximates p^*) is not the simulator itself. * p.2 col 2 text after eq (1): calling p(theta|x_s) the "true" posterior confused me for a while, as p^*(theta|x) is the true posterior, and p(theta|x) is just the model you're attempting to fit. Maybe adjust the text here? Trivial typos: * p. 1 col 1 l.-3 "absence of A paired datasetS of" clash between "a" and a plural. * p.2 col 1 l.25 integral in p^*(x_0) should be d\theta not dx_0 * p.2 col 1 l.38 p^*(theta,x_0) not defined (the reader has to infer this). * p.7 col 1 l.-12 "we see of J-NPE and" - delete "of" * p.7 col 2 l. 12 refers to Figure 3. The paper only appears to have 2 Figures. Questions For Authors: See above. Ethical Review Concerns: . Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for carefully reading our paper and appreciate the very constructive feedback regarding the plausibility of finding a calibration set sampled from $p^\star$ in practical settings. We now discuss this question and other comments in detail. ## Source distribution of the calibration set We agree with the reviewer that in some settings, assuming the calibration set consists of samples from the distribution $p^\star(\theta, x_o)$ may be unrealistic. Nevertheless, in important practical scenarios, it may sometimes be possible to influence the calibration set collection process to ensure it is representative of the test case scenario. In the following discussion, we denote an “imperfect” calibration set distribution as $p^c(\theta, x_o)$. ### Case 1: $p^c(\theta, x_o)$ and $p^*(\theta, x_o)$ have the same support In this case, as the size of the calibration set increases, the fine-tuning step should still be able to correct the estimated sufficient statistic for all observations in the test set. If the calibration set is large, one could even perform importance sampling to ensure that the fine-tuning step optimizes the expected loss on the ideal calibration set distribution. However, for smaller calibration sets, the sensitivity of RoPE depends on whether the fine-tuned neural statistic estimator (NSE) can generalize to unseen observations. This is highly dependent on the application, particularly regarding the non-linearity of the model misspecification and the inductive bias of the NSE. If the NSE struggles to generalize, we expect the entropy-regularized optimal transport (OT) step can bring some additional robustness. Specifically, observations where the fine-tuned NPE performs well will naturally be matched to the appropriate simulated statistics, leading to accurate posterior estimates. 
In contrast, for observations where the sufficient statistic does not generalize well, their representation in the embedding space may become unstructured. As a result, applying OT to these embeddings may lead to random matches with the simulated statistics, causing the posterior to revert to the prior distribution, filtered by the parameter space where the NPE is reliable. ### Case 2: $p^c(\theta, x_o)$ and $p^*(\theta, x_o)$ have disjoint support In this more extreme case, even an arbitrarily large calibration set may fail to provide RoPE with relevant training examples for test observations. Consequently, the fine-tuning step relies entirely on out-of-distribution generalization, which is highly problem-dependent. If the fine-tuned NPE cannot generalize at all to the test set, the fine-tuning process may be counterproductive. However, even in this situation, we expect the OT step to highlight this issue. Since real and synthetic observations will not match meaningfully, the transport matrix should approach a uniform distribution and the posterior will revert to the prior. While this second scenario is particularly challenging to study, we propose to approximate "Case 1" in a controlled experiment on the Light Tunnel task. Specifically, we will construct a calibration set using samples from Prior B (as in Figure 2) and evaluate performance on a test set drawn from Prior A. If the paper is accepted, we plan to include this experiment in the appendix and add a discussion in Section 6 to highlight RoPE’s sensitivity to non-representative calibration sets, following the considerations outlined above. ## On the failure of RoPE when the simulator is increasingly simple We agree that further investigating the trade-offs between model complexity and robustness to misspecification is an important research direction but goes beyond the scope of our paper.
In many applications, there exists a trade-off: - **Simplistic simulators** may ignore complex relationships between parameters and observations but have structural properties that generalize well to real-world data. SBI models trained on these simulators may be less sensitive to misspecification. - **Highly complex simulators** may capture intricate relationships but risk overfitting to simulation-specific artifacts, failing to generalize under slight misspecifications. For RoPE, because it assumes conditional independence $x_o \perp \theta \mid x_s$, it is better suited for handling the latter case rather than highly simplistic simulators that fail to encode meaningful parameter-observation relationships. The conditional independence assumption acts as a backstop against shortcut learning, but for the same reason, RoPE’s posterior informativeness is inherently limited by the informativeness of the posterior learned from simulations, as transparently discussed in Section 3, Paragraph 2. ## Minor comments & typos We appreciate the reviewer’s attention to detail in pointing out minor phrasing inconsistencies and imprecisions. We will carefully edit the manuscript to address these comments. --- Rebuttal Comment 1.1: Comment: Thank you for your considered response, and for the discussion and responses of the other reviewers -- all very informative. With apologies to the authors, but I am in two minds about how/whether to adjust my score. * If the authors can contribute the described new analysis/discussion on the source distribution of the calibration set (great!), then I will be happy to increase my score 3->4. * However the paper does not currently have this analysis, and we simply have a promise that this will be included. (No slight against the word of the authors is intended here.) So on what is currently available in the non-revised manuscript, my score should remain at 3.
Clearly my complaint (which has nothing to do with the authors of this paper) is that a peer review process that requires reviewers to make accept/reject recommendations on paper acceptance based on *promises* to do things has always seemed rather suspect. Many (other) author teams spend the few days between the release of reviews and the due date of rebuttals in an extremely unhealthy crunch to perform additional requested analyses. While I am glad that the current authors prioritise their health, such authors are implicitly penalised compared to the crunch teams due to the uncertainty that they will deliver on their promises. Is this (both unhealthiness and implicit penalisation) really something that the ML community wants to promote? Apologies again (particularly to the authors) for the rant. I will increase my score to 4 and trust the authors, as the only fair outcome for this nice paper. Obviously I need to go and lie down now. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for trusting our good faith and raising your score to 4. We're happy to share preliminary results from the experiments we committed to including in the final version of the paper. You can find them anonymized [here](https://anonymous.4open.science/r/RoPE_ICML_Rebuttal-4246/README.md). While these results currently lack error bars, they already support the discussion presented in our rebuttal, which we will include in Section 6. Specifically, they show a performance drop for RoPE but not a complete collapse, as RoPE keeps improving upon the prior while being calibrated (at all calibration set sizes above 10). They also highlight that, in this situation, RoPE performs well on samples similar to those seen in the calibration set, but struggles with out-of-distribution samples.
While this is only a partial view of the issue you mentioned, whose impact will strongly depend on the exact setup considered, we believe these empirical results are consistent with the theoretical discussion we will add to Section 6. We believe this additional experiment will provide valuable insights to readers, and we thank you again for raising this important point. With our best regards, The authors
Summary: This paper presents a method for improving simulation-based inference when the simulator is misspecified. It combines neural posterior estimation with optimal transport, using a potentially small labeled calibration set of real data paired with corresponding parameters to correct for the misspecified simulator. It then fine-tunes a representation using the calibration data and uses OT to align the distributions of representations from simulated and real data. Claims And Evidence: The main claim from the paper is that the method RoPE can provide calibrated posterior distributions under simulator misspecification, even with a small calibration set. There are a few experiments with synthetic data, and two real-world benchmark tasks (Figure 1), showing improved log-posterior probability and Average Coverage AUC compared to baselines and competing methods. Ablation studies demonstrate the importance of both the fine-tuning and OT steps. This is well-supported evidence to me. There is also a claim that RoPE is robust to prior misspecification. Experiments on extensions of tasks C and E (figure 2b, 2c and appendix C), where the true parameter distribution differs from the prior (limited to a single misspecification), show that RoPE maintains performance. Somewhat the least convincing claim given the limited scope of the prior selections. Another claim is that RoPE offers a controllable balance between calibration and informativeness via hyperparams. Figure 2a shows an effect of the gamma hyperparam on posterior sharpness and calibration. Sounds good to me. Finally, a claim that adding a calibration set in the way specified can improve on standard SBI. This is overall well discussed and non-ambiguous to me. Methods And Evaluation Criteria: The methods and evaluations are internally consistent and appropriate given the assumptions and limitations. The bigger question is whether those assumptions and limitations are too restrictive for broad real-world applicability.
RoPE combines existing techniques (NPE, OT) in a novel way on a hard problem. The method is clearly described, with equations and algorithm. The core modeling assumption is explicitly stated (eqn 3), and the fine-tuning objective (eqn 6) is also well defined. The evaluations use a mix of synthetic and real-world benchmarks, simplified lab setups, and comparisons to many baselines and competing methods. In fact there is a wealth of evaluations, always within the scope. Metrics are appropriate for evaluating posterior calibration and informativeness. Ablation studies OK to me too. Theoretical Claims: The theoretical foundation, particularly its self-calibration property and the validity of its posterior factorization relies on the conditional independence assumption eqn (3). This assumption is acknowledged by the authors, and necessary but represents a significant limitation. It basically says that the simulator completely mediates the relationship between parameters and real-world observations. Thus the theoretical guarantees and practical performance can degrade when it breaks, and leads to wrong inferences. Experimental Designs Or Analyses: Synthetic data (tasks A, B, C, D): generally well-designed to test different aspects of misspecification. The use of established benchmarks (CS, SIR) are all good to me. Tasks E, F are beyond purely synthetic data but simple lab setups, not really real-world applications. Baselines are all well and confined within the definitions of the paper. Metrics: LPP and ACAUC are reasonable choices for evaluating posterior quality. The discussion of their limitations is helpful. I have nothing much to critique on the experiments. They are all well done within its limited scope anyway: the experimental designs are limited by the small size of the calibration set, small number of parameters, and the conditional independence assumption. Supplementary Material: Yes I did review all the supplementary material. 
This is very valuable and brings needed clarification of definitions, model architectures, metric computation, and experimental details. A comment is that all posterior corner plots are a bit messy. Relation To Broader Scientific Literature: The paper adequately cites the SBI, OT and misspecification (in the Bayesian sense) literature. Essential References Not Discussed: My understanding of RoPE also connects to many other areas of machine learning which I feel the paper is missing: * semi-supervised learning: fundamentally RoPE can be framed as a semi-supervised method, but this is not explicitly discussed. * domain adaptation: RoPE can also be viewed as a DA method, with the simulator as the source domain and the real world as the target domain. There are a set of domain adaptation papers which attempt to bridge the gap between theory and data. Sim2real (in robotics) is also a very relevant field. Other Strengths And Weaknesses: I mentioned a few strengths and weaknesses, here are some others: **Strengths**: * one of the very few papers dealing with SBI misspecification, which is one of the main SBI limitations * intuitive combination of NPE and OT for handling misspecification * well-written paper, clear problem formulation and motivation. * scoped experiments and evals * open about limitations (this is a hard problem) **Weaknesses**: (most already mentioned) * limited setting: requirement of a labeled calibration set and the focus on low-dimensional parameters (also a limitation of SBI though). Similarly, it is not clear how it would scale to larger and more diverse datasets, especially if the calibration set does not grow simultaneously * dependence on conditional independence: limiting. Other Comments Or Suggestions: * OTT is not defined (I had to look up the reference). I feel the authors could also propose a setting where RoPE could be useful: design an experiment with a small calibration set, and one may be good to go with SBI.
This is perhaps not such a strong requirement in some experimental sciences, while almost impossible in most observational sciences. Questions For Authors: Not too many questions, beyond extending to other non-SBI sim-to-real gap literature. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. We greatly appreciate the careful assessment of our work and the valuable insights provided. Below, we address the key points raised. ## Additional References We completely agree that RoPE is closely related to the semi-supervised learning and Sim2Real literature. If accepted, we will use part of the additional page to elaborate on these connections. ## Setting Limitations As demonstrated in our experiments, in the absence of a calibration set, the version of RoPE without fine-tuning (OT-only) provides a viable way to handle model misspecification in some benchmarks. Nevertheless, as noted in the introduction, when posterior estimates need to be reliable, a labeled dataset is crucial for empirical validation, and it thus appears natural to save a subset of this dataset for training purposes. We also recognize that settings where no labeled data is available exist. However, evaluating methods in such scenarios is highly application-dependent, as it requires appropriate unsupervised metrics, and we believe it may be difficult to create one generic method to handle this diverse set of problems. While we hope that RoPE’s fine-tuning step can be adapted to leverage unsupervised data and metrics specific to these applications, we acknowledge that this remains an open question. We believe future work focused on adapting RoPE to these real-world use cases could be highly impactful. Regarding scalability, as acknowledged in Section 5, we have not yet explored RoPE’s performance in medium- or high-dimensional parameter spaces, and its ability to correct for misspecification in those settings remains an open challenge. As the reviewer pointed out, model misspecification in SBI is a complex issue, and it is unlikely that any single method can address all possible cases. ## Conditional Independence We appreciate the reviewer’s careful consideration of RoPE’s assumptions.
As noted in Section 3, the independence assumption is indeed a strong constraint. It limits RoPE’s ability to uncover dependencies between observations and parameters that are not explicitly captured by the simulator. If this assumption does not hold, RoPE cannot recover the true posterior, even with an arbitrarily large calibration set. However, RoPE will approximate the posterior by leveraging the simulator’s known dependencies, which may contribute to robustness and interpretability. Furthermore, please note that this conditional independence assumption does not hold for the two real-data testbeds (tasks E and F), but RoPE still returns informative posterior estimates. --- Rebuttal Comment 1.1: Comment: Thank you for your answers. My comments have been addressed. I am looking forward to the revised version.
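For reference, the conditional-independence assumption $x_o \perp \theta \mid x_s$ discussed in this thread licenses the posterior factorization (written in the paper's notation as far as it can be reconstructed from these reviews):

$$p(\theta \mid x_o) = \int p(\theta \mid x_s, x_o)\, p(x_s \mid x_o)\, \mathrm{d}x_s = \int p(\theta \mid x_s)\, p(x_s \mid x_o)\, \mathrm{d}x_s,$$

where the second equality uses the assumption. This makes explicit why RoPE's informativeness is capped by the simulation-based posterior $p(\theta \mid x_s)$: any dependence between $\theta$ and $x_o$ must be mediated by the simulated statistic $x_s$.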
Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models
Accept (poster)
Summary: This paper studies the important topic of whether symbolic representations and behaviors emerge in LLMs when performing abstract reasoning tasks. The paper argues that symbolic abstraction and operations appear at different levels of attention (low-level abstraction, mid-level prediction, high-level retrieval), and verifies these hypotheses via causal mediation analysis, attention plots and ablation studies. All analysis was conducted on a simple reasoning task (generating strings following the pattern ABA or ABB from two-shot examples), and demonstrated that the attention analysis aligns well with the expected behavior under these controlled experiments. Claims And Evidence: In a narrow sense, the evidence provided by the paper, as given by the experimental section, supports well the paper's claim that LLMs demonstrated remarkable symbolic behavior for the "ABA, ABB" reasoning task. However, to claim that LLMs developed intrinsic symbolic mechanisms for general abstract reasoning tasks (as the title and introduction imply), more evidence and analysis need to be provided beyond the simple "ABA, ABB" task. Methods And Evaluation Criteria: The methods (causal mediation analysis, attention ablation etc.) make a lot of sense for the purpose of identifying symbolic behaviors, under the controlled setup of performing "ABA, ABB" tasks. Theoretical Claims: The paper could have provided a deeper theoretical foundation on the connection between connectionism and symbolism, as well as elaborated on why symbolism has been historically essential in the development of AI. Experimental Designs Or Analyses: The experiments are well-designed and analyzed from the perspective of causal intervention. Supplementary Material: Additional experimental details in the appendix.
Relation To Broader Scientific Literature: This paper is well-contextualized and connected to the established literature on the philosophical symbolism of artificial intelligence, as well as the historical development of neurosymbolic integration in AI model design. Essential References Not Discussed: The paper could have cited a highly related recent paper: Emergent Symbol-like Number Variables in Artificial Neural Networks, Satchel Grant, Noah D. Goodman, James L. McClelland, arXiv 2025 Other Strengths And Weaknesses: Strength: The paper studies a really important and intriguing topic: the emergence and role of symbolism in neural networks. If it is successfully proven that symbolic and formal logic abstractions do exist in LLMs' internal representations as an emergent phenomenon, this would significantly help researchers understand the inner workings of LLMs, and even how intelligence works in general. Weakness: The cases studied by the paper are purely synthetic and a little over-simplistic. They are not strong enough to support the general claim that symbolic abstraction emerges from the LLM reasoning process. Other Comments Or Suggestions: In Sec 3.1, right above Eq. (1), it should be "that instantiated an BAA rule" instead of "...that instantiated an ABB rule"? Questions For Authors: 1. The specific examples ("ABA", "ABB") studied by the paper are highly structured and position-driven. While it is plausible that the model has developed an abstraction of the symbolic patterns from the in-context examples, these specific tasks are perhaps too simple for the paper to draw the reliable conclusion that symbolic mechanisms are emergent from the LLM reasoning process. To systematically study the symbolic behaviors of LLMs, it would be more convincing to conduct experiments on general reasoning and logical inference tasks, for example maths and arithmetic, first-order logic etc. 2. The study of the paper is based on a few-shot learning setup.
It remains unclear whether the symbolic abstraction behavior still holds in broader LM use cases like instruction following. For example, instead of giving concrete examples for ABA or ABB, just instruct the model with "Generate 10 random triplets, each following the pattern ABB". Will we still be able to observe a similar abstraction mechanism as the paper analyzed? 3. Given that modern LLMs have already demonstrated impressive capability in formal reasoning tasks, it is not hard to believe that LLMs have already developed intrinsic symbolic representations and inference rules for these tasks. However, one missing piece is to understand how LLMs developed such a capability from training data. It would be great if the paper provided some analysis or just intuition on how the LLMs acquired such capability, and what types of training data are critically responsible for the symbolic emergence. 4. The paper argues that "we hypothesize that the value embeddings in these heads do not represent the identity of the input tokens, but instead represent only their position." (line 132). If this is the case, then the value embeddings are decoupled from semantic representation, and simply become a position indicator. Wouldn't it be easier to verify this hypothesis just by clustering value embeddings from different examples? If the embeddings contain little semantic info, then these vector clusters should be clearly position-dependent. 5. It was not very clear how the retrieval heads work: If, as the paper argues, the retrieval head retrieves actual tokens closest to the latent symbolic variable, then shouldn't Fig. 4 (e) be a plot between variables (A, B) and the token vocabulary? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional models and tasks We have tested **12 additional models** (Figures 1, 6, 8-10), including GPT-2 (small, medium, large, and extra large), Gemma-2 (2B, 9B, and 27B), QWEN-2.5 (7B, 14B, 32B, and 72B), and Llama-3.1 8B (along with our original tests on Llama-3.1 70B), and **two additional tasks** (Figures 2-4) including a letter string analogy task and a verbal analogy task. With the exception of GPT-2, we find that our results are qualitatively replicated across all of these models and tasks. These results strongly suggest that the identified mechanisms are a ubiquitous feature of abstract reasoning in sufficiently large language models. ## Direct analysis of key, query, and value embeddings To gain a more precise understanding of the identified attention heads, we have now performed additional analyses applying RSA to the key, query, and value embeddings (Tables 1-2 and Figures 11-14). For abstraction heads, we found that queries primarily represented token identity, keys represented a mixture of both tokens and abstract variables, and values primarily represented the abstract variables. For symbolic induction heads, we found that queries and keys primarily represented the relative position within each in-context example, while values primarily represented abstract variables. For retrieval heads, we found that queries primarily represented abstract variables, keys represented a mixture of both tokens and variables, and values primarily represented the predicted token. 
These results further confirm the hypothesized mechanisms, namely that abstraction heads convert tokens to variables, symbolic induction heads make predictions over these variables, and retrieval heads convert symbols back to tokens. ## Clarification about retrieval heads The hypothesis about retrieval heads is not that they retrieve tokens that are most *similar* to the abstract variable. The hypothesis is that they retrieve the tokens that are *bound* to the abstract variable. For example, for an ABA rule, with the following incomplete query example: la (A), li (B), ? The final variable is A, and in this example A is bound to the token ‘la’. Retrieval heads take the predicted variable (A) as input, and retrieve the token that’s bound to it (‘la’) to predict the token that will come next. To make this clearer in the revised paper, we will clearly describe this in terms of variable-binding, rather than using the ambiguous phrase ‘associated with’. ## Intuitions on the origins of emergent symbolic mechanisms We agree that it is interesting to consider the factors leading to the emergence of symbolic mechanisms. In the discussion section, we have already included some discussion of aspects of the transformer architecture that may contribute (innate similarity mechanisms, indirection), but aspects of the training data may also contribute, including: 1) the inherently relational nature of language, 2) factors such as ‘burstiness’ that have been related to in-context learning [1], and 3) the massive scale and generality of LLM training data, which may force the development of more general-purpose mechanisms. Directly analyzing the relative contribution of these factors will necessitate training language models from scratch under various conditions, which would require extensive resources, but we consider this an important avenue for future work. We will add further discussion of these issues to the revised paper. 
[1] Chan, S., Santoro, A., Lampinen, A., Wang, J., Singh, A., Richemond, P., ... & Hill, F. (2022). Data distributional properties drive emergent in-context learning in transformers. Advances in neural information processing systems, 35, 18878-18891. ## Additional related work Thank you for bringing the paper from Grant et al. to our attention. We agree that it is highly relevant. Relative to our work, their study investigates smaller neural networks trained from scratch on specific tasks rather than large pre-trained models, and also investigates a different task setting, but the high-level emphasis on emergent symbolic representations is closely related to our work, and we will make sure to cite it in our revised paper. ## Clarification on description of Equation 2 We will correct the description of equation 2 to describe it as a 'BAA' rule, while also clarifying that ABB and BAA rules are equivalent in this task (they involve the same pattern of relations).
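As a concrete illustration of the task format discussed in this rebuttal (in-context triplets such as 'la li la' for an ABA rule, followed by an incomplete query example), a prompt generator might look like the sketch below. This is only illustrative: the token pool and function name are hypothetical stand-ins, not the authors' actual setup.

```python
import random

def make_identity_rule_prompt(rule="ABA", n_examples=4, seed=0):
    """Build a few-shot identity-rule prompt: several complete triplets
    following the rule, then an incomplete query whose final token the
    model must predict (the token bound to the final abstract variable)."""
    rng = random.Random(seed)
    pool = ["la", "li", "ku", "po", "fe", "na", "ti", "go", "sa", "mu"]
    lines = []
    for _ in range(n_examples):
        a, b = rng.sample(pool, 2)            # two distinct nonsense tokens
        triplet = (a, b, a) if rule == "ABA" else (a, b, b)
        lines.append(" ".join(triplet))
    # Incomplete query example: the model must infer the rule and produce
    # the token bound to the final variable (A for ABA, B for ABB).
    a, b = rng.sample(pool, 2)
    target = a if rule == "ABA" else b
    lines.append(f"{a} {b}")
    return "\n".join(lines), target

prompt, answer = make_identity_rule_prompt("ABA", n_examples=3, seed=1)
```

Because the tokens are randomly sampled, solving the task requires inducing the abstract rule rather than exploiting token statistics, which is the point of the paradigm.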
Summary: This paper investigates the mechanisms behind how Large Language Models (LLMs) perform two simple abstract reasoning tasks related to algebraic identity rules (left and right). They identify three types of attention heads: abstraction heads, symbolic induction heads, and retrieval heads, which are implicated in performing the task. The attention patterns match the predicted values very closely, and show evidence that LLMs do indeed possess the capability to do symbolic processing to some extent. Claims And Evidence: While the setup and experiments are simple, they are thorough and well thought out. I found the claims well-supported by the evidence. Methods And Evaluation Criteria: There are no benchmarks used here, just a simple dataset. But the proof-of-concept of symbolic processing is sufficient for the goal of the paper. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: The experiments are well-designed and nice counterfactuals are used to elicit different responses from the network to study information flow/processing. Supplementary Material: Yes, I read the entire appendix. Relation To Broader Scientific Literature: This work attempts to show that LLMs trained on natural text corpora are capable of developing symbolic processing mechanisms that reason over abstract concepts. I am not aware of previous work that has studied this specifically, though the original induction heads thread from Anthropic did suggest that induction heads may be able to process both literal and abstract relationships. It is nice to see this in a concrete, though simple form. I will also point out (though do not expect the authors to have been aware of/cited it due to its release date) a recent paper that attempts to clarify the difference between induction heads and function vector heads [1], showing that literal copying and few-shot ICL performance are mediated by different sets of heads. ___ [1] Yin, et al. 
Which Attention Heads Matter for In-Context Learning? 2025. (https://arxiv.org/abs/2502.14010) [2] Liu, et al. Transformers Learn Shortcuts to Automata. ICLR 2022. (https://openreview.net/forum?id=De4FYqjFueZ) [3] Song, et al. Out-of-distribution generalization via composition: a lens through induction heads in Transformers. (https://arxiv.org/pdf/2408.09503) Essential References Not Discussed: The literature awareness of this paper is great! Here are a couple of works that might help contextualize the contributions of the paper, such as [2], who study what algorithms transformers learn when trained on symbolic data settings such as algebraic groups, or [3], who study induction heads for "symbolized" language reasoning, though I think theirs may still fit the standard token-induction rather than the symbolic induction studied here. However, I think the work already cited is pretty well-covered. Other Strengths And Weaknesses: Overall, I think this is a solid paper, and I gained some new insights while reading it. I think the experiments section is a little light, and I'd like to see additional evidence or more concrete analysis of the connections among the three kinds of attention heads - whether they are directly connected or there are other mediators between them. - There are only two tasks for which symbolic induction is tested (ABA and ABB). One thing that could strengthen the claims of this paper is to examine other setups where symbolic induction may be helpful/used. Simple extensions that I imagine wouldn't change the results you find are things like ABCABC..., or ABBCABBC... Other Comments Or Suggestions: - In Lines 155-157, can you clarify why you say the final token in each in-context example is the same for both contexts? Did you mean that A_N is the target token for both examples? Typos: Line 320: patten -> pattern Questions For Authors: 1. Have you studied these classes of attention heads on a broader class of relations? 2. 
Are the function vector heads only symbolic induction heads or did they relate to the abstraction heads or retrieval heads you find? 3. Are the abstraction heads you identify a more general form of "previous token heads" identified in previous circuit work [4,5]? ___ [4] Elhage, et al. In-context Learning and Induction Heads. 2022. (https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) [5] Wang, et al. Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small. (https://arxiv.org/abs/2211.00593) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional models and tasks We have tested **12 additional models** (Figures 1, 6, 8-10), including GPT-2 (small, medium, large, and extra large), Gemma-2 (2B, 9B, and 27B), QWEN-2.5 (7B, 14B, 32B, and 72B), and Llama-3.1 8B (along with our original tests on Llama-3.1 70B), and **two additional tasks** (Figures 2-4) including a letter string analogy task and a verbal analogy task. With the exception of GPT-2, we find that our results are qualitatively replicated across all of these models and tasks. These results strongly suggest that the identified mechanisms are a ubiquitous feature of abstract reasoning in sufficiently large language models. ## Direct analysis of key, query, and value embeddings To gain a more precise understanding of the identified attention heads, we have now performed additional analyses applying RSA to the key, query, and value embeddings (Tables 1-2 and Figures 11-14). For abstraction heads, we found that queries primarily represented token identity, keys represented a mixture of both tokens and abstract variables, and values primarily represented the abstract variables. For symbolic induction heads, we found that queries and keys primarily represented the relative position within each in-context example, while values primarily represented abstract variables. For retrieval heads, we found that queries primarily represented abstract variables, keys represented a mixture of both tokens and variables, and values primarily represented the predicted token. 
These results further confirm the hypothesized mechanisms, namely that abstraction heads convert tokens to variables, symbolic induction heads make predictions over these variables, and retrieval heads convert symbols back to tokens. ## Additional analyses on relationship to function vectors In the original submission, we showed that function vector scores and symbolic induction head scores are very highly correlated, indicating that symbolic induction heads are responsible for producing function vectors. We have now performed additional analyses that suggest a more complex interpretation (Table 4). Specifically, when function vector scores are computed based on the final position in the sequence, these scores are highly correlated with symbolic induction heads, but not with abstraction heads. In contrast, when function vector scores are computed based on the final item in each context-example, these scores are highly correlated with abstraction heads, but not symbolic induction heads. These results suggest that function vectors are first computed by abstraction heads at the level of individual in-context examples, and symbolic induction heads are primarily responsible for aggregating them across in-context examples. ## Related work on relationship between induction heads and function vectors Thank you for bringing the paper from Yin et al. to our attention. We agree it is highly relevant and will make sure to cite it in our revised paper. It is especially interesting that Yin et al. find evidence that function vector heads evolve from induction heads. Our results suggest an explanation for this finding: function vector heads (i.e. symbolic induction heads) can be viewed as performing induction over symbolic (rather than literal) inputs. ## Additional related work Thank you for pointing out the additional related work from Liu et al. and Song et al. We agree that these studies are also relevant, and will make sure to cite them in the revised paper. 
## Relationship to previous-token heads It is unlikely that the abstraction heads are performing a function similar to previous-token heads. Previous-token heads copy the token from position t-1 into the residual stream at position t. By contrast, abstraction heads do not copy the token from position t-1 or t-2; they compute *which* of those tokens is related to the token at position t, and what that relationship is. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments for more models and tasks, as well as the clarification - they are helpful and have definitely strengthened the paper! I plan on retaining my score to accept.
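The representational similarity analysis (RSA) referenced throughout these rebuttals can be illustrated in miniature: correlate an empirical similarity matrix of head outputs with a model matrix that codes only abstract-variable identity. The data-generating assumptions below (a shared 'variable' direction plus token-specific noise) are synthetic stand-ins, not the paper's actual embeddings.

```python
import numpy as np

def rsa_score(embeddings, labels):
    """Correlate an empirical cosine-similarity matrix with an abstract
    'model' matrix: 1 where two items share a variable (A/A or B/B),
    0 otherwise. Only off-diagonal entries enter the correlation."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = X @ X.T                                   # cosine similarity matrix
    model = (labels[:, None] == labels[None, :]).astype(float)
    mask = ~np.eye(len(labels), dtype=bool)         # drop the diagonal
    return np.corrcoef(sim[mask], model[mask])[0, 1]

# Synthetic head outputs: one fixed direction per abstract variable,
# plus token-specific noise (so some token info is preserved, as observed).
rng = np.random.default_rng(0)
d = 64
v = {"A": rng.normal(size=d), "B": rng.normal(size=d)}
labels = np.array(["A", "B"] * 20)
emb = np.stack([v[l] + 0.3 * rng.normal(size=d) for l in labels])
score = rsa_score(emb, labels)                      # high if variables dominate
```

A high score indicates that the head's output geometry is organized by abstract variable rather than by token identity, which is the signature the rebuttal reports for abstraction and symbolic induction heads.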
Summary: The paper studies the internal mechanisms of a Llama3-70B on an in-context learning task. Specifically, they study an abstract reasoning task in which the model is given multiple demonstrations of the form ABA or ABB, where A and B correspond to randomly selected tokens, and on the final example the model has to predict either A or B. The authors identify a combination of multiple attention heads that are causally responsible for solving the task: (1) *Abstraction heads* extract relational information about the input tokens in each demonstration, (2) *Symbolic induction heads* perform induction over the relational information, (3) *Retrieval heads* predict the subsequent token by retrieving the value associated with the extracted relational variable. They causally verify their observations using activation patching experiments. Specifically, they verify the existence of token-independent, abstract variables containing relational information. Additionally, they analyse attention patterns and compare the similarity of attention head outputs to evaluate whether they match their expectation. Finally, they contrast induction heads as identified in Olsson et al. (2022) with the symbolic induction heads identified in their setting. Interestingly, they find that for symbolic attention heads the prefix matching score does not correlate with the causal mediation score, suggesting that these are different from induction heads. Instead, those heads appear to be involved in the creation of function vectors (Todd et al., 2023). Claims And Evidence: The experiments are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: The evaluation criteria, e.g. the causal mediation score for the activation patching experiments, make sense for the problem at hand. Theoretical Claims: The paper does not make any theoretical claims. 
Experimental Designs Or Analyses: I did check the soundness and validity of the experimental design and analyses; most importantly, the activation patching experiments, representational similarity analyses as well as the comparison with induction heads and function vectors. Supplementary Material: The submission does not provide any supplementary material. Relation To Broader Scientific Literature: The paper fits well into the research direction aimed at (mechanistically) understanding reasoning in language models. In contrast to some prior work, this paper studies an in-context reasoning task in which function vectors (Todd et al., 2024) seem to emerge. While Todd et al. (2024) introduce the notion of abstract computations being represented in simple vectors in language models, they do not mechanistically study how any of those emerge. Thus, this paper is an in-depth study of how one specific function vector emerges in a large language model. Essential References Not Discussed: There have been a number of papers that mechanistically study some form of reasoning in language models. This includes works such as: - A. Al-Saeedi and A. Harma, ‘Emergence of symbolic abstraction heads for in-context learning in large language models’, in Proceedings of Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning @ COLING 2025, 2025, pp. 86–96. - J. Brinkmann, A. Sheshadri, V. Levoso, P. Swoboda, and C. Bartelt, ‘A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task’, in Findings of the Association for Computational Linguistics: ACL 2024, 2024, pp. 4082–4102. - S. Dutta, J. Singh, S. Chakrabarti, and T. Chakraborty, ‘How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning’, Transactions on Machine Learning Research, 2024. - A. Saparov et al., ‘Transformers Struggle to Learn to Search’, in The Thirteenth International Conference on Learning Representations, 2025. 
Given the relevance of these works, it would be valuable to contextualize this study within their findings and highlight how it builds upon or differs from their approaches. Other Strengths And Weaknesses: Overall, I think this is solid work aiming at mechanistically understanding in-context reasoning in language models. It builds upon existing work in the field and studies reasoning in an original setting with interesting results and findings. Notably, the authors study this task in Llama3-70B, which presents additional challenges due to its size. Despite this, the paper remains clear and easy to follow. The main weaknesses are the limited engagement with prior work studying similar settings, and at times a lack of depth in the analyses. For example, it would have been interesting to study failure cases of the model. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
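For readers unfamiliar with the activation-patching methodology this review evaluates, the core move can be shown on a toy stand-in module: record an activation from a 'source' run, then overwrite the same activation during a 'target' run and observe the effect on the output. In the paper the hook would target a specific attention head inside Llama3-70B; everything below is illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in "model"; in practice this would be a pre-trained transformer
# and the hook would sit on one attention head's output.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

captured = {}
def capture(module, inp, out):
    captured["h"] = out.detach().clone()   # record the source activation
def patch(module, inp, out):
    return captured["h"]                   # overwrite with the source activation

x_src, x_tgt = torch.randn(1, 8), torch.randn(1, 8)

h = model[0].register_forward_hook(capture)
y_src = model(x_src)                       # source run: record activation
h.remove()

h = model[0].register_forward_hook(patch)
y_patched = model(x_tgt)                   # target run with patched activation
h.remove()

y_tgt = model(x_tgt)                       # clean target run, for comparison
```

Comparing `y_patched` against `y_tgt` and `y_src` quantifies how much of the output the patched component causally mediates; in this toy case the patched layer determines everything downstream, whereas in a real model only a fraction of the effect flows through any single head.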
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional related work Thank you for bringing these additional studies to our attention. We agree that they are highly relevant, and will cite them in the revised paper. We include below a discussion of the relationship to our results: - Al-Saeedi and Harma identify a type of attention head that they refer to as ‘symbolic abstraction heads’ in smaller transformer models trained from scratch on identity rule tasks. Based on the attention patterns that they present, these heads seem to correspond to what we refer to as ‘symbolic induction heads’. Relative to that work, our study provides many complementary lines of evidence (RSA, attention analyses, ablations, causal mediation), identifies two additional types of heads, and demonstrates the involvement of these mechanisms in large pre-trained language models. However, the setting studied by Al-Saeedi and Harma provides a complementary line of evidence that may be especially useful in future work investigating the contribution of training regime and architecture (which requires training models from scratch). - The other studies investigate different types of reasoning, namely logical reasoning (Dutta et al.) and planning (Brinkmann et al. and Saparov et al.). One difference with our study is that language models have generally not been found to be very good at performing these types of reasoning reliably, whereas the analogical reasoning and rule induction setting that we investigate in our work is one where language models have shown stronger performance. This provides an opportunity to understand how language models are solving the types of problems that they seem to be able to solve in a relatively robust manner. 
We will add further discussion of these issues to the revised paper. ## Analysis of failure cases We agree that it is important to understand how the identified mechanisms contribute to the model’s failures as well as successes. To address this, we looked at how the RSA results differed between correct and error trials. We found that the outputs of abstraction heads and symbolic induction heads represented abstract variables more precisely (had a higher correlation with the abstract RSA matrix) in correct vs. error trials (Table 5). However, the effect size of this difference was very small for abstraction heads, suggesting that the effect was driven primarily by differences in the symbolic induction heads. One interpretation of these results is that abstraction heads correctly encode symbols on error trials, but symbolic induction heads do not successfully aggregate these results across in-context examples, perhaps due to interference from other heads. We will include these results in the revised paper. ## Additional results We have also included a number of additional results in the rebuttal file, including results for **12 additional models** (GPT-2 small, medium, large, and extra large; Gemma-2 2B, 9B, and 27B; QWEN-2.5 7B, 14B, 32B, and 72B; and Llama-3.1 8B), **2 additional tasks** (letter string analogies and verbal analogies), more detailed analyses of the identified attention heads (looking at the representations formed by keys, queries and values), and further tests of symbolic invariance.
Summary: This paper investigates the internal mechanisms that support abstract reasoning in LLMs, focusing on the open-source model Llama3-70B. The paper makes a contribution to the ongoing debate about the reasoning capabilities of LLMs by proposing a novel three-stage symbolic architecture and providing empirical evidence to support its existence. The authors test this hypothesis using a simple but paradigmatic algebraic rule induction task (ABA/ABB rules), where the model must predict the next token in a sequence based on in-context examples. Llama3-70B achieves 95% accuracy on this task, suggesting robust performance. Through a series of mechanistic interpretability techniques—including causal mediation analysis, attention pattern analysis, representational similarity analysis, and ablation studies—the authors identify and validate the roles of the proposed attention heads. Claims And Evidence: 1. The paper claims that the identified mechanisms perform symbolic processing, but the evidence is not entirely convincing. The representations produced by abstraction and symbolic induction heads are not perfectly abstract (they retain some token-specific information), suggesting that the mechanisms may not be fully symbolic. 2. The paper does not provide a clear definition of what constitutes "symbolic" processing in the context of neural networks, which weakens the claim. 3. The study is limited to a single task (ABA/ABB rule induction) and a single model (Llama3-70B). While the task is paradigmatic for studying relational abstraction, it is unclear whether the identified mechanisms generalize to more complex reasoning tasks or other models. 4. The paper does not explore whether these mechanisms emerge in smaller models or models with different architectures, which would strengthen the claim that they are a general feature of LLMs. 5. The claim that the findings resolve the symbolic vs. neural network debate is overstated. 
The paper does not provide sufficient evidence to conclude that the identified mechanisms are truly symbolic or that they generalize to other tasks and models. Methods And Evaluation Criteria: 1. The use of an algebraic rule induction task (ABA/ABB rules) is well-justified for studying abstract reasoning in LLMs. This task requires the model to identify and apply abstract rules (e.g., repetition or alternation) based on in-context examples, making it a paradigmatic case of relational abstraction. The use of arbitrary tokens ensures that the task cannot be solved by relying on statistical patterns, which is critical for isolating abstract reasoning capabilities. This method has merits: The task is simple yet effective for probing the model’s ability to perform abstract reasoning. It has been used in prior work to study systematic generalization in neural networks and symbol-processing in human cognition, providing a strong foundation for comparison. However, The task is relatively simple compared to more complex reasoning tasks, e.g., mathematical reasoning, planning, or analogical reasoning, which might limit the generalizability of the findings. 2. The methods and evaluation criteria used in the paper are generally appropriate for the problem and application at hand. Theoretical Claims: No theoretical claims proposed in this paper. Experimental Designs Or Analyses: The causal mediation analysis is a rigorous and appropriate method for identifying the causal role of specific components (e.g., attention heads) in the model’s behavior. By patching activations from one context to another, the authors isolate the contributions of different heads to the model’s predictions. However, the analysis is somewhat limited to the specific task and model studied, and it does not rule out alternative explanations for the observed behavior (e.g., other emergent mechanisms). 
The interpretation of the causal mediation scores depends on the assumptions of the analysis, which are not fully discussed in the paper. For example, it is unclear whether the patching procedure introduces any artifacts or biases. The experimental designs and analyses in the paper are generally sound and appropriate for the problem and application at hand. The algebraic rule induction task is well-suited for studying abstract reasoning, and the mechanistic interpretability techniques (causal mediation analysis, attention pattern analysis, representational similarity analysis, ablation studies) provide multiple lines of evidence to support the proposed architecture. Supplementary Material: I didn't check the supplementary materials carefully. Relation To Broader Scientific Literature: The contributions of this paper are related to 1. addressing the debate about the robustness and nature of these capabilities, with some studies questioning whether LLMs rely on structured reasoning or merely approximate it through statistical patterns; 2. the mechanistic interpretability of Transformers Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The submission identifies and characterizes new types of attention heads (abstraction heads, symbolic induction heads, retrieval heads) that support abstract reasoning in LLMs. It also provides a detailed, mechanistic account of how these heads work together to perform a form of symbol processing. 2. It contributes to the symbolic vs. neural network debate by demonstrating how neural networks can develop symbol-like processing capabilities without explicit architectural biases. It uses a combination of causal mediation analysis, attention pattern analysis, representational similarity analysis, and ablation studies to validate the proposed mechanisms. 3. 
It builds on prior work in emergent reasoning, systematic generalization, and mechanistic interpretability, while providing new insights into the role of function vectors and induction heads. Weaknesses: 1. The study focuses on a single, relatively simple task (ABA/ABB rule induction) and a single model (Llama3-70B), raising questions about whether the findings generalize to more complex reasoning tasks or other models. 2. The paper claims that the identified mechanisms perform symbolic processing, but the evidence is not entirely conclusive. The representations produced by abstraction and symbolic induction heads are not perfectly abstract (they retain some token-specific information), which weakens the claim. 3. The paper does not thoroughly rule out alternative explanations for the observed behavior, such as statistical approximations. Other Comments Or Suggestions: 1. Could the authors clarify the definition of "symbolic"? The paper frequently uses the term "symbolic" to describe the identified mechanisms, but it is not always clear what this means in the context of neural networks. Providing a clearer definition (e.g., distinguishing between discrete, rule-based symbols and distributed, approximate symbols) would strengthen the paper’s claims and help readers understand the nature of the proposed mechanisms. 2. The paper could expand its discussion of the broader implications of the findings for AI and cognitive science. For example, how might these insights inform the design of more robust and interpretable AI systems? How do these mechanisms compare to human cognitive processes? 3. To strengthen the generalizability of the findings, consider testing the identified mechanisms on a wider range of tasks (e.g., mathematical reasoning, planning, or analogical reasoning) and models (e.g., smaller or larger LLMs, models with different architectures). 
Explore whether these mechanisms emerge under different training regimes or datasets, which would provide insights into their dependence on specific training conditions. 4. The paper does not thoroughly rule out alternative explanations for the observed behavior, such as statistical approximations. Including additional experiments or analyses to address these possibilities would make the findings more robust. 5. The paper can also include a more detailed discussion of its limitations, such as the simplicity of the task, the focus on a single model, and the potential dependence on specific architectural features of the transformer. Acknowledging these limitations would provide a more balanced perspective on the findings. Questions For Authors: See the above questions and comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for the thoughtful and detailed feedback. We present detailed responses below to address each of the issues raised. Throughout these responses, we refer to new results that can be viewed here: https://anonymous.4open.science/r/RB-F30A/13386.pdf ## Additional models and tasks We have tested **12 additional models** (Figures 1, 6, 8-10), including GPT-2 (small, medium, large, and extra large), Gemma-2 (2B, 9B, and 27B), QWEN-2.5 (7B, 14B, 32B, and 72B), and Llama-3.1 8B (along with our original tests on Llama-3.1 70B), and **two additional tasks** (Figures 2-4) including a letter string analogy task and a verbal analogy task. With the exception of GPT-2 (see ‘Testing smaller models’ below), we find that our results are qualitatively replicated across all of these models and tasks. These results strongly suggest that the identified mechanisms are a ubiquitous feature of abstract reasoning in sufficiently large language models. ## Defining ‘symbol processing’ We define ‘symbol processing’ in terms of two key properties: - Symbolic representations are *invariant* to the content of the values that they are bound to. That is, the representation of the abstract variable ‘A’ should be the same regardless of which values this variable is assigned to. Although the abstraction and symbolic induction head outputs preserve some information about specific tokens, we find that they still contain a subspace that represents abstract variables in an invariant manner (see ‘Additional evidence for invariant symbolic representations’ below). - Symbol processing mechanisms employ *indirection*, meaning that variables refer to content that is stored at a different location than the variables themselves (i.e., they are pointers). In the identified architecture, the retrieval heads use the inferred symbols to retrieve the associated tokens from earlier positions in the sequence. 
That is, the symbol representations function as pointers that identify the address of the to-be-retrieved tokens. We will add an explicit statement to the revised paper that clearly defines symbol processing in these terms, and add more discussion explicitly relating this definition to the results. ## Additional evidence for invariant symbolic representations The RSA results showed that the outputs of abstraction and symbolic induction heads preserve token identity to some extent, which may seem to suggest that they do not represent abstract variables in an invariant manner. However, it is possible for these heads to invariantly represent abstract variables within a specific subspace, while also representing token identity. To test for this possibility, we performed an experiment in which a linear decoder was trained to predict the abstract variable (A or B) based on the outputs of these heads, and tested on its ability to *generalize out-of-distribution to problems involving completely novel tokens*. The decoder achieved nearly perfect (>98%) accuracy for both types of heads (Table 3). These results demonstrate that a subspace exists in which abstract variables are represented in an invariant manner, despite the fact that concrete tokens are also represented in other regions of the embedding space (note also that tokens are represented much more weakly, as shown in Table 1). ## Testing smaller models Unlike the larger models that we tested, none of the GPT-2 variants were able to reliably perform the task (Figure 1), and they did not show robust evidence for the presence of abstraction heads (Figure 1 and 6). These results were also not consistent between the two rule types (ABA vs. ABB, see Figures 5 and 7), again suggesting a lack of robustness. These results suggest that symbolic mechanisms may only emerge at certain scales (whether in terms of model or training data size). 
These results also strengthen our argument that abstract reasoning in language models depends on the presence of emergent symbolic mechanisms – language models that do not develop these mechanisms are not able to reliably solve abstract reasoning tasks. ## Responses to other comments - Implications for designing more robust AI systems: One implication of these results is that the identified mechanisms could potentially be built directly into the architecture of language models. This has been explored to some extent in architectures such as the Abstractor (which implements a mechanism similar to abstraction heads), but could be taken further by incorporating the other mechanisms that we identify. - Impact of training regime and architecture: We agree that it would be very interesting to investigate how aspects of the training regime and model architecture contribute to the emergence of symbolic mechanisms. These experiments would necessitate very extensive resources, as they would involve training language models from scratch, but we consider this an important direction for future work. We will add further discussion of these issues to the revised paper.
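For concreteness, the linear-decoder generalization test described above (training a probe to read out the abstract variable and testing on problems with completely novel tokens) can be sketched as follows. Everything here is a synthetic stand-in — the "head output" is a made-up embedding model, not the actual abstraction/symbolic induction head activations:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_tokens = 32, 20

# Stand-in for a head's output: a token-identity vector plus a shared
# "abstract variable" direction (A or B). This mimics the claim that an
# invariant variable subspace coexists with token-identity information.
var_dirs = {"A": rng.normal(size=dim), "B": rng.normal(size=dim)}

def head_output(token_vec, var):
    return token_vec + 3.0 * var_dirs[var]

def make_split(tokens):
    X, y = [], []
    for t in tokens:
        for label, var in enumerate(("A", "B")):
            X.append(head_output(t, var))
            y.append(label)
    return np.array(X), np.array(y)

X_tr, y_tr = make_split(rng.normal(size=(n_tokens, dim)))
X_te, y_te = make_split(rng.normal(size=(n_tokens, dim)))  # novel tokens

# Linear decoder fit by least squares on +/-1 targets.
w, *_ = np.linalg.lstsq(X_tr, 2.0 * y_tr - 1.0, rcond=None)
acc = float(np.mean((X_te @ w > 0).astype(int) == y_te))
print(acc)  # high accuracy despite completely novel test tokens
```

High out-of-distribution accuracy in this setup only arises because a variable direction is shared across token identities — the same logic that lets the probe's >98% accuracy in Table 3 support the invariant-subspace claim.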
ADHMR: Aligning Diffusion-based Human Mesh Recovery via Direct Preference Optimization
Accept (poster)
Summary: The authors propose a framework that integrates a diffusion-based human mesh recovery model with direct preference optimization. The core idea is to train HMR-Scorer, a model that evaluates the quality of human mesh predictions without requiring 3D annotations, and use it to create a preference dataset. This dataset consists of pairs of high- and low-quality predictions, which are then used to fine-tune the base model via preference optimization. Additionally, ADHMR leverages HMR-Scorer for data cleaning, which improves model accuracy. Experimental results show that ADHMR outperforms state-of-the-art probabilistic HMR methods. Claims And Evidence: Some claims in the paper are incorrect. For example, "This is primarily because diffusion-based HMR models cannot use traditional 3D joint reprojection loss." However, many existing works [A,B] successfully use reprojection loss to guide the denoising process. Additionally, depending on the camera model, there is a trade-off between 3D joint accuracy and 2D alignment [C]. What camera model does this method use? The proposed approach does not consider reprojection error—how can it ensure that the final results are consistent with the input image? [A] "Score-guided diffusion for 3d human recovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [B] "Closely interactive human reconstruction with proxemics and physics-guided adaption." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [C] "Tokenhmr: Advancing human mesh recovery with a tokenized pose representation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Methods And Evaluation Criteria: - This work builds on top of HypoNet. What are the differences between this method and HypoNet? What is UVD in Line 176? - This base model is trained on accurate indoor datasets and synthetic datasets. 
How can it ensure accurate scoring on in-the-wild images? Theoretical Claims: The improvement in training heavily depends on the performance of HMR-Scorer. However, this module is difficult to train on outdoor data, making its effectiveness in outdoor scenarios uncertain. Experimental Designs Or Analyses: - The experimental comparison is unfair. The proposed method uses different training data than the baselines. Moreover, it requires fine-tuning on the benchmark, which gives it an unfair advantage over other baselines. For example, HMR2.0 could also be fine-tuned on the benchmark using reprojection error. - Add more details, such as the specific formulas for PLCC and SRCC. - Which datasets were used for data cleaning? Supplementary Material: This paper does not provide any supplementary materials. Relation To Broader Scientific Literature: This work may be insightful for future pseudo-label generation in human pose estimation. Essential References Not Discussed: These works use reprojection error to guide the diffusion, which can achieve model-image alignment. [A] "Score-guided diffusion for 3d human recovery." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [B] "Closely interactive human reconstruction with proxemics and physics-guided adaption." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: - A comparison with diffusion-based methods such as ScoreHMR and CloseInt needs to be included. - There is no analysis of whether this method generalizes well across different poses, occlusions, or camera angles. - The effectiveness of HMR-Scorer is a critical component of ADHMR, yet the experiments evaluating its reliability are somewhat limited. 
While Table 1 reports correlation metrics (PLCC and SRCC) between HMR-Scorer’s scores and standard reconstruction errors, a more detailed analysis of failure cases is absent. For example, does HMR-Scorer struggle with extreme poses, occlusions, or rare human-object interactions? Qualitative examples of cases where HMR-Scorer fails would strengthen the validation. Code Of Conduct: Affirmed. Overall Recommendation: 2
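For reference, the PLCC and SRCC metrics the review asks to define are the Pearson linear correlation coefficient and the Spearman rank correlation coefficient. A minimal numpy sketch with made-up scores and errors (not the paper's data) — SRCC is simply Pearson correlation computed on ranks:

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (this simple version assumes no ties)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical scorer outputs vs. reconstruction errors (e.g. MPJPE, mm).
# Errors are negated so a good scorer correlates positively.
scores = np.array([0.9, 0.7, 0.8, 0.2, 0.4, 0.1])
errors = np.array([35.0, 48.0, 40.0, 90.0, 70.0, 95.0])

print(plcc(scores, -errors), srcc(scores, -errors))
```

In this toy example the score ordering exactly matches the (inverted) error ordering, so SRCC is 1.0 while PLCC is slightly below 1 because the relationship is not perfectly linear.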
Rebuttal 1: Rebuttal: Thanks for recognizing our SOTA performance, and the potential for pseudo-label generation. We truly appreciate your constructive comments and address them below. ### **Q1. Clarification of claims** Thank you for pointing this out. We would like to clarify that end-to-end diffusion models predicting from noise typically avoid reprojection loss, as early denoising steps yield unrealistic poses, making such loss ineffective. CloseInt also notes: *“Early iterations are meaningless for human motion.”* In contrast to ScoreHMR and CloseInt, which are two-stage (using external pose estimators before diffusion), our method is purely end-to-end, sampling from random noise. This avoids dependency on potentially biased external estimators and improves robustness on in-the-wild (ITW) data. Visual comparison with ScoreHMR: https://anonymous.4open.science/r/ICML_2025_Rebuttal-E36D/mE1y_Q1.pdf We will revise the phrasing in the final version. **Consistency with the input image.** Our method directly learns the 3D human pose distribution conditioned on images without needing an explicit camera model or reprojection loss in training. We follow ScoreHypo to predict UVD joints used for mesh reconstruction and scoring. At inference, we recover 3D joints from UVD using a full-perspective projection with estimated intrinsics. As TokenHMR notes, minimizing 2D error can harm 3D accuracy due to imprecise camera parameters, even with full-perspective cameras. Instead, we ensure consistency implicitly via HMR-Scorer, which learns alignment using pixel-aligned image features. On clean data, scoring by 3D error naturally favors well-aligned results; on ITW data, the scorer transfers 2D alignment knowledge. Additional ITW results are provided at: https://anonymous.4open.science/r/ICML_2025_Rebuttal-E36D/8ugF_Q1.pdf ### **Q2. Differences with HypoNet** We introduce a preference optimization framework to improve image consistency and ITW robustness. 
Rather than proposing a new diffusion model, our method focuses on finetuning existing diffusion-based HMR models like HypoNet. ### **Q3. How to ensure accurate in-the-wild scoring** To address limited and noisy outdoor data, we train HMR-Scorer on large-scale synthetic datasets (e.g., BEDLAM, GTA-Human) for effective knowledge transfer. As noted in BEDLAM, HMR models trained solely on synthetic data can achieve SOTA accuracy. Table 1 in the paper shows strong results on GTA-Human (with ITW scenes). HMR-Scorer visualization is at: https://anonymous.4open.science/r/ICML_2025_Rebuttal-E36D/mE1y_Q3.pdf ### **Q4. About experiment comparison** We updated Table 2 to indicate which methods are finetuned on the target benchmark: **Updated Table 2**: https://anonymous.4open.science/r/ICML_2025_Rebuttal-E36D/mE1y_Q4.png We added results showing: - Our ADHMR outperforms finetuned HMR 2.0a. - Prior works (e.g., HMDiff, Zolly) also finetune on 3DPW. To compare fairly, we introduce ADHMR (ITW)†, trained without Human3.6M, which still outperforms others. Table 3 and Table 4 in the paper show that finetuning on target or extra datasets offers only marginal gains, highlighting the effectiveness of our framework. ### **Q5. More details** UVD in L176 denotes 2D coordinates (u, v) in the image and their corresponding depth (d) of human keypoints, which aligns 3D joint with image space. We will include the formulas for PLCC and SRCC in the revision. The datasets used for data cleaning are listed in L401–403 (right column). ### **Q6. Comparison with ScoreHMR and CloseInt** - ScoreHMR is a two-stage method that refines pretrained pose estimates using a diffusion model, applying reprojection loss only during sampling while trained with standard diffusion loss. - CloseInt is also a two-stage method and focuses on multi-person interaction. It refines plausible initial poses with a proxemics and physics-guided diffusion model. 
- Our ADHMR is one-stage and learns to synthesize poses from random noise, guided solely by image cues and preference alignment, without handcrafted priors. The **Updated Table 2 (see Q4)** includes results of ScoreHMR and CloseInt, where our ADHMR outperforms both. ### **Q7. Method generalization analysis** We provide more ITW results at: https://anonymous.4open.science/r/ICML_2025_Rebuttal-E36D/8ugF_Q1.pdf ADHMR shows generalization under extreme poses (a–c), occlusions (a, b, d, e), and complex backgrounds (f). However, extreme camera angles remain challenging, likely due to limited training coverage, as shown here: https://anonymous.4open.science/r/ICML_2025_Rebuttal-E36D/mE1y_Q7.pdf ### **Q8. Qualitative examples and failure case of HMR-Scorer** Although HMR-Scorer is robust even for ITW scenes **(see Q3)**, it still fails in some occlusion cases: https://anonymous.4open.science/r/ICML_2025_Rebuttal-E36D/mE1y_Q8.pdf With limited visual cues, the scorer may prefer predictions aligned with the image but not the 3D GT. We plan to explore temporal scorers to address this.
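The UVD-to-3D recovery mentioned above (back-projecting (u, v, d) keypoints with estimated intrinsics) follows the standard full-perspective pinhole relation; a minimal sketch with hypothetical intrinsics, not the paper's intrinsics-estimation procedure:

```python
import numpy as np

def uvd_to_xyz(uvd, fx, fy, cx, cy):
    """Back-project UVD keypoints to camera-space 3D via a pinhole model.

    u, v are pixel coordinates, d is depth; (fx, fy, cx, cy) are camera
    intrinsics. This is the generic full-perspective relation only.
    """
    u, v, d = uvd[..., 0], uvd[..., 1], uvd[..., 2]
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return np.stack([x, y, d], axis=-1)

# Hypothetical intrinsics; a keypoint at the principal point with depth 2m
# back-projects onto the optical axis.
joints = uvd_to_xyz(np.array([[128.0, 128.0, 2.0]]),
                    fx=500.0, fy=500.0, cx=128.0, cy=128.0)
print(joints)  # -> [[0. 0. 2.]]
```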
Summary: The paper adapts Diffusion-based DPO (DDPO) to HMR by proposing ADHMR. Specifically, the paper introduces an HMR-scorer model that generates a reward for image-mesh alignment. This module is given local image features, sampled at the UV joint locations, together with global image features, and outputs a score. The network is then trained similarly to RankNet on synthesized noisy annotations from ground truth SMPL poses. The HMR-scorer is then used for preference dataset synthesis, HMR DDPO, and data cleaning. The paper highlights the positive impact of each of these applications, demonstrating the effectiveness of their scoring strategy. Specifically, the paper shows significant improvements on their base model, HypoNet, through preference fine-tuning and a stronger scoring model. The paper also shows that the data-cleaning strategy can improve the performance of SOTA HMR models. Claims And Evidence: The paper's claims are supported via experimental evidence. For instance, they show the superiority of ADHMR over previous works, the effectiveness of HMR-scorer in assessing mesh quality, and the impact of data cleaning on 3 state-of-the-art models. The paper makes some minor claims, such as improving in-the-wild robustness, without proper analysis. However, this is acceptable, given the similarity of such claims with the literature. Methods And Evaluation Criteria: The evaluation criteria (PVE, MPJPE, etc.) are the standard and widely used metrics for evaluating HMR methods. The evaluation (3DPW, Human3.6M) and training sets are standard research benchmarks. The paper has also included recent papers up to 2024, showing good awareness of the literature during its comparisons. Theoretical Claims: None provided. Only formulations of DDPO are used. Experimental Designs Or Analyses: All experiments, including HMR-scorer evaluation, ADHMR evaluation, ablations, data cleaning, and qualitative comparisons, are sound and well-justified. Supplementary Material: None provided. 
Relation To Broader Scientific Literature: Although DPO has been extensively researched for NLP/LLMs and even image/video diffusion models, I did not find its adaptation to HMR tasks. However, it has been proposed for human motion generation [1], which is an entirely different task with similar input/output parameters and metrics. Reward modeling and scoring multiple hypotheses have also been extensively researched, but the paper introduces several implementation changes over prior works, showing improved performance. Several papers also directly address data cleaning, but the paper does not mention them since they are not the paper's focus. References: [1] Pappa, Massimiliano, et al. "MoDiPO: text-to-motion alignment via AI-feedback-driven Direct Preference Optimization." arXiv preprint arXiv:2405.03803 (2024). Essential References Not Discussed: The references are adequate. Other Strengths And Weaknesses: ### Strength 1. While the individual components (diffusion model, DPO, reward modeling) are not entirely new, the paper successfully combines them to improve prior works. 2. The paper is well-written. It has clear goals and designs adequate experiments to showcase its results. 3. The paper is well aware of recent literature and is one of the first works adapting DDPO into HMR tasks. ### Weaknesses 1. The paper makes incremental contributions over the base model by adapting DDPO for HMR. Since the usefulness of DDPO was already well-known, it reduced the impact of the paper's findings. 2. The choice of baselines is limited to only HypoNet. It would've been much more impactful if HMR-Scorer showed improved performance on other hypothesis-generation approaches. Other Comments Or Suggestions: The paper's contributions are incremental since it applies well-established methods to HMR without many changes. However, I found no significant issues in the paper, and the experiments were adequate. 
My final decision is borderline, leaning toward acceptance due to good experiment design. Typo: - Line 87, Right Column: Duplicate citations for (Rafailov et al., 2024) (Rafailov et al., 2024) Questions For Authors: 1. How did you prepare the data for training the HMR-Scorer? Did you introduce rotational noise to the ground truth? Could you elaborate on the process? 2. Could you analyze the biases caused by DPO on the diffusion model? Does it improve rare/uncommon poses like lying or rolling on the floor? Could you provide some qualitative comparisons of generalizability to unseen data? 3. I understand that it is not common in this line of research to provide runtime information. However, could you give the model's general training/inference time and memory cost? 4. Please also address my concerns about the weaknesses mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 4
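The RankNet-style training mentioned in this review's summary amounts to a logistic loss on the difference of predicted scores for a higher- and a lower-quality prediction; a minimal numpy sketch (an illustration of the generic pairwise objective, not the paper's exact loss):

```python
import numpy as np

def pairwise_ranking_loss(score_winner, score_loser):
    """RankNet-style pairwise loss: -log sigmoid(s_w - s_l).

    The loss is small when the scorer rates the higher-quality (winner)
    prediction above the lower-quality (loser) one, and large otherwise.
    """
    diff = np.asarray(score_winner) - np.asarray(score_loser)
    # log(1 + exp(-diff)), computed stably via logaddexp
    return float(np.mean(np.logaddexp(0.0, -diff)))

# Correctly ordered pair -> small loss; inverted pair -> large loss.
print(pairwise_ranking_loss(2.0, -1.0))  # ~0.049
print(pairwise_ranking_loss(-1.0, 2.0))  # ~3.049
```

Minimizing this loss over synthesized (winner, loser) pairs teaches the scorer an ordering without ever requiring absolute quality labels.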
Rebuttal 1: Rebuttal: Thank you for recognizing the effectiveness of our scoring strategy, significant improvements over the base model, and good experiment design. We deeply appreciate your valuable comments and address them below. ### **Q1. Generalizability to unseen data & Bias caused by DPO** To further support our claim on in-the-wild robustness, we provide more ITW qualitative results here: https://postimg.cc/jwY2qCgT These examples demonstrate improved performance under extreme poses, occlusions, and complex backgrounds—for example, (a–b) involve lying or rolling on the floor, where our method shows better 2D alignment. In the paper, Figure 2 (g–h) shows ITW images without ground truth, allowing only qualitative comparison. Results on 3DPW also partially reflect ITW capability with outdoor, unconstrained scenes. **Bias brought by DPO.** Our framework improves generalization by shifting the objective from fitting noisy in-the-wild pseudo labels to learning from relative preferences, encouraging the model to capture perceptually aligned and semantically meaningful poses (see L55–59). Besides, HMR-Scorer implicitly learns 2D image alignment by integrating global and pixel-aligned features. This leads to better robustness and transferability in challenging real-world scenarios. ### **Q2. More relevant references** Thank you for the helpful comment. While our work is the first to apply DPO to HMR, we acknowledge related efforts in motion generation and reward modeling. We also agree that data cleaning has been explored in prior works. For example, SPIN [A] filters training data based on reprojection error, and Unite the People [B] relies on manual selection. We will include these references in the revision. [A] Kolotouros et al., Learning to reconstruct 3D human pose and shape via model-fitting in the loop, ICCV 2019. [B] Lassner et al., Unite the people: Closing the loop between 3D and 2D human representations, CVPR 2017. ### **Q3. 
About contribution** Thank you for your thoughtful assessment and for acknowledging the strength of our experimental design. While DDPO is well-established in tasks like image generation, its application to HMR is non-trivial. ADHMR introduces key domain-specific innovations: - a learned HMR-Scorer capturing perceptual image–mesh alignment, and - a preference dataset constructed without human labels. ADHMR is the first to successfully apply DPO to HMR, enabling training from relative preferences rather than noisy ITW pseudo labels. In contrast, conventional methods often overfit these labels, degrading 3D accuracy—as also noted in TokenHMR [C]. Beyond performance gains, HMR-Scorer supports automatic data cleaning, making it broadly helpful in enhancing other HMR pipelines (see Sec. 4.5). [C] Dwivedi, Sai Kumar, et al. "Tokenhmr: Advancing human mesh recovery with a tokenized pose representation." CVPR. 2024. ### **Q4. Generalization to other hypothesis-generation approaches** Thank you for the valuable suggestion. We chose HypoNet as our main baseline as it is the latest SOTA diffusion-based HMR model for general-purpose scenarios. Our framework is compatible with other diffusion-based methods [D, E], and we plan to extend experiments in future work. Deterministic models (e.g., HMR 2.0) can also adopt our preference framework by discretizing the pose space (e.g., via VQ-VAE in TokenHMR [C]) and replacing the regression network with a classification network for DPO training. Meanwhile, HMR-Scorer and data cleaning are model-agnostic and can enhance training quality across both probabilistic and deterministic pipelines—offering a practical path for pseudo-label refinement in in-the-wild settings where 3D ground truth is scarce. [D] Cho, Hanbyel, and Junmo Kim. "Generative approach for probabilistic human mesh recovery using diffusion models." *ICCV*. 2023. [E] Stathopoulos, Anastasis, Ligong Han, and Dimitris Metaxas. "Score-guided diffusion for 3d human recovery." 
*CVPR*. 2024. ### **Q5. Duplicate citations** We will fix this in the revision. ### **Q6. HMR-Scorer training data preparation** Yes, we add joint-wise Gaussian noise to the ground truth SMPL pose to simulate rotational errors, with magnitudes empirically determined. We will include the exact noise parameters in the final appendix. ### **Q7. Training/inference cost** We provide the training/inference time and memory cost below: - **Inference** | Model| Time per prediction| Batch Size| Memory| |-|-|-|-| | HMR-Scorer| 56 ms| 32| 3350 MiB| | ADHMR ($M=10$) | 3 ms| 160| 8400 MiB| | ADHMR ($M=100$) | 1.6 ms| 80| 24 GB| | ADHMR ($M=200$) | 1.5 ms| 40| 24 GB| All inference was conducted on a single NVIDIA V100 GPU. $M$ denotes the number of predictions. - **Training** | Model| Time| Batch Size | Memory per GPU| |-|-|-|-| | HMR-Scorer | 24 h| 32| 11500 MiB| | ADHMR| 7 h| 40| 18 GB| All training was conducted on 4 NVIDIA V100 GPUs. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing a comprehensive rebuttal addressing all the points raised in my initial review. My questions regarding HMR-scorer training, DPO bias, generalizability, and computational cost have been satisfactorily answered. The additional qualitative results are also compelling, demonstrating impressive results on irregular postures and hard examples. Furthermore, the author's arguments about the non-trivial aspects and domain-specific innovations involved in applying DDPO to HMR clarified my concerns about the contributions of this work. As a result of this discussion phase and reading other concerns raised by the reviewers, my reservations have been fully addressed. I would like to increase my score to Accept, given that the details from the rebuttal are added to the final paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for acknowledging our efforts in addressing the concerns. 
We are particularly grateful for the recognition of the non-trivial and domain-specific aspects of applying DDPO to HMR, as well as the appreciation of our qualitative results and clarifications regarding scorer training, generalizability, DPO bias, and computational cost. We will make sure to incorporate these clarifications and additional insights presented in the rebuttal into the final version of the paper. We truly appreciate your updated assessment and recommendation to accept the paper.
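The Q6 data-preparation step (joint-wise Gaussian noise on ground-truth SMPL poses to synthesize imperfect predictions) can be sketched as follows; the noise magnitude `sigma` is a hypothetical placeholder, since the exact parameters are deferred to the paper's appendix:

```python
import numpy as np

def perturb_smpl_pose(pose_axis_angle, sigma=0.05, rng=None):
    """Add joint-wise Gaussian noise to an axis-angle SMPL pose.

    sigma (radians) is a hypothetical magnitude, not the paper's actual
    setting; in practice it would be tuned to span a realistic range of
    prediction errors.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.normal(scale=sigma, size=pose_axis_angle.shape)
    return pose_axis_angle + noise

gt_pose = np.zeros((24, 3))  # 24 SMPL body joints, axis-angle
noisy_pose = perturb_smpl_pose(gt_pose, sigma=0.05)
print(np.abs(noisy_pose - gt_pose).max())  # small rotational perturbation
```

Each noisy pose then yields a mesh whose distance to the ground truth provides a free quality ranking for scorer training.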
Summary: This paper proposes the first method to use preference optimization to improve Human Mesh Recovery (HMR) models. The paper first introduces an HMR-scorer model to rank the human mesh result produced by (an arbitrary) HMR method. Experiments show the score is strongly correlated with the reconstruction metrics, supporting that this scorer model can be reliably applied to other datasets without 3D ground truths. This scorer is then used to construct a preference dataset of {Image, winner result, loser result} triplets. These data are used to fine-tune a base HMR model using the direct preference optimization (DPO) loss. Result: a big leading margin in HMR accuracy on common benchmarks. Comprehensive experiments validated the different stages in the paper. Overall, I think this is a solid paper. Update after rebuttal: after reading other reviewers' comments and the authors' rebuttal, my rating of acceptance remains unchanged. Claims And Evidence: Claim 1: the proposed method is the first framework that enhances HMR models using preference optimization. To my knowledge, this is true. Claim 2: the HMR-scorer model is effective. This is validated by experiments in Sec. 5.2. Claim 3: the HMR-scorer improves the SOTA HMR models by filtering out low-quality training data. This is also validated through comprehensive experiments in Sec. 5.3, in particular Table 5. To summarize, the major novelty and technical claims are valid and verified. Methods And Evaluation Criteria: The proposed method makes sense to me, and matches well with the evaluation. First, for the HMR-scorer, the motivation is to replace human preference rating in the dataset curation step for preference optimization methods. If the dataset has precise 3D ground truth, the errors can be computed directly and used for the ranking. If the ground-truth annotations are noisy (e.g. in-the-wild datasets), the scorer function can help effectively create positive and negative samples. 
Consequently, the evaluation just needs to verify that the scores correlate correctly with the common HMR error metrics (such as per-vertex errors, body joint errors, etc.) -- that's how Sections 5.1 and 5.2 are designed. Therefore I think this validation is a good design. Second, using the constructed preference dataset for DPO of the diffusion-based HMR method -- which sounds straightforward in hindsight -- is a great idea, and this paper is the first to do so. The method achieves impressive performance improvements on the standard benchmarks (Table 2), which I think sufficiently validates the efficacy of the design. Theoretical Claims: To my understanding, this paper is more about applying the DPO method (Wallace et al., 2024) to the specific problem of improving diffusion-based probabilistic human mesh recovery. Therefore the major theoretical claim (while indirectly made) is that DPO is effective for this setting. The experiments have validated this claim. Experimental Designs Or Analyses: As discussed above, the experiments are designed to validate the major technical claims. They achieve this goal. 1. The first part of the experiment is to validate whether the HMR-scorer model aligns with the common error metrics in HMR. This is validated on the datasets with precise ground truth. Metrics here are the correlation coefficients, which I believe are sensibly chosen. 2. The second part is about whether the preference-rated data can improve common HMR models. This is validated through detailed benchmarking and an ablation study (Table 3). 3. The third part of the experiment is to use the HMR-scorer model to clean up the data. The effectiveness is demonstrated on 3 different base models. Therefore, I think the experimental designs are comprehensive and effective. Supplementary Material: I did not find supplementary material for this submission. 
Relation To Broader Scientific Literature: The models and datasets delivered by this work can be applied to more precise human pose/motion estimation from images or video streams. This can facilitate many applications in other areas, such as motion capture in the movie industry, pose estimation in live sports, sports medicine and analysis, etc. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: - Overall, I think this paper proposes a good idea that is validated through extensive experiments. The scorer network should be a drop-in enhancement for diffusion-based probabilistic HMR models. The method has improved the performance on common HMR benchmarks by a large margin -- such a big improvement is not often seen in the recent 2 years. The paper is well written too. Therefore I recommend acceptance. Weakness: L. 254, "through some mathematical techniques" -- it would be good to refer the reader to the corresponding location in the original literature. Other Comments Or Suggestions: - I particularly appreciate the detailed captions of the tables and figures. - Minor: the DPO paper is cited twice on L. 87, right column. Questions For Authors: Is the preference dataset also applicable to non-probabilistic HMR models to boost their performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
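The {Image, winner result, loser result} triplet construction described in this review's summary can be sketched as follows; the scorer outputs and hypothesis names here are made up, standing in for HMR-Scorer over sampled mesh predictions:

```python
import numpy as np

def build_preference_pair(hypotheses, scores):
    """Pick the highest- and lowest-scored hypotheses as (winner, loser).

    Sketch only: in the paper the scores would come from HMR-Scorer
    evaluated on multiple predictions sampled for the same image.
    """
    scores = np.asarray(scores)
    return hypotheses[int(scores.argmax())], hypotheses[int(scores.argmin())]

# Hypothetical: 4 sampled mesh hypotheses for one image, with scorer outputs.
hyps = ["mesh_0", "mesh_1", "mesh_2", "mesh_3"]
winner, loser = build_preference_pair(hyps, [0.3, 0.9, 0.1, 0.6])
print(winner, loser)  # mesh_1 mesh_2
```

The resulting (image, winner, loser) triplets form the preference dataset that the DPO fine-tuning consumes, with no human labeling required.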
Rebuttal 1: Rebuttal: Thanks for highlighting the novelty of applying preference optimization to HMR, the effectiveness of our framework, and the comprehensive experiments. We truly appreciate your encouraging feedback and respond to your points below. ### **Q1. Equation 5 elucidation** Thank you for your helpful comment. Our formulation follows the approach introduced in Diffusion-DPO [A], which provides a detailed derivation (see Section 4 and Supp. S2 of that paper). To improve readability, we will revise our manuscript to explicitly cite this reference at Line 254 and include a pointer to the key steps in the original paper. [A] Wallace, Bram, et al. "Diffusion model alignment using direct preference optimization." CVPR. 2024. ### **Q2. Preference dataset application** Thank you for the great question. The preference dataset is not directly applicable to standard non-probabilistic HMR models, as the DPO framework assumes the model produces discrete predictions with associated probabilities. Our current implementation is based on a diffusion-based HMR model, which naturally supports such probabilistic outputs, consistent with the assumptions in Diffusion-DPO. However, we believe adapting deterministic models to fit within this preference framework is both feasible and promising. As demonstrated in TokenHMR [B], one can first train a VQ-VAE to discretize the continuous pose space, and then modify the original regression head (e.g., in HMR2.0) into a classification head over quantized tokens. This adaptation would allow deterministic HMR models (e.g., HMR 2.0) to benefit from our preference datasets and preference optimization framework, unlocking similar gains in robustness and image alignment. We consider this a promising direction for future research. Importantly, the HMR-Scorer and our data cleaning pipeline are fully model-agnostic, and thus directly applicable to both probabilistic and deterministic models. 
They provide a scalable way to improve training data quality by filtering out noisy pseudo-labels—particularly useful for in-the-wild datasets where 3D ground truth is often unavailable or unreliable. We view this as a practical step toward broader adoption of preference-guided learning in HMR, and we plan to explore these extensions in future work. [B] Dwivedi, Sai Kumar, et al. "Tokenhmr: Advancing human mesh recovery with a tokenized pose representation." CVPR. 2024. ### **Q3. Typos** Thank you for pointing out the duplicated citation. We will fix this in the final version.
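For intuition, the preference optimization discussed above can be illustrated with the generic DPO loss on model-versus-reference log-probabilities. The paper itself uses the Diffusion-DPO formulation (Wallace et al., 2024), whose derivation differs, so this is only a sketch of the underlying objective:

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Generic DPO loss: -log sigmoid(beta * (margin_w - margin_l)),
    where each margin is the policy's log-prob improvement over a
    frozen reference model.
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return float(np.logaddexp(0.0, -margin))  # = -log(sigmoid(margin))

# If the policy matches the reference exactly, the loss is log(2);
# raising the winner's log-prob and lowering the loser's reduces it.
baseline = dpo_loss(-5.0, -5.0, -5.0, -5.0)
improved = dpo_loss(-4.0, -6.0, -5.0, -5.0)
print(baseline, improved)
```

The key property is that only relative preferences enter the loss — consistent with the rebuttal's point that training shifts from fitting noisy pseudo labels to learning from comparisons.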
Summary: This paper targets improving HMR methods with preference prediction. To this end, the authors present a prediction assessment model named HMR-Scorer. Further, the authors create a preference dataset using HMR-Scorer, which is used to finetune the base model and existing HMR methods. The full method, called ADHMR, shows SOTA results. Claims And Evidence: I think the claims are clear. Methods And Evaluation Criteria: The proposed method ADHMR has been tested on several standard datasets such as 3DPW and Human3.6M using standard metrics (MPJPE, etc.). Theoretical Claims: At L.254, the authors say "through some mathematical techniques ..." and yield Eqn. (5). It would be better to elucidate the procedure in the appendix or point readers to where they can find similar derivations. Experimental Designs Or Analyses: I have some questions about the experiments. 1. Why do the methods in Table 5 not appear in Table 2? Any special reasons? 2. In Table 1, does ScoreNet share the same training set as HMR-Scorer? How is a fair comparison ensured? 3. Table 2 is a bit unfair because ADHMR is informed of more training data through HMR-Scorer, which is trained on DNA-Rendering, etc. I do not doubt the effectiveness of ADHMR; Table 4 has proven that. Additional descriptions of the seen datasets in Table 2 would make it more rigorous. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: This paper is based on the CVPR 2024 paper HypoNet, with significant differences. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. The paper is well written. The contributions are stated clearly. Incorporating DPO with HMR is an effective idea. 2. Experiments have shown the improvements brought by ADHMR. Weaknesses: 1. In Figure 2, it would be more informative to show the GT and a side view as well. 2. The first visual result in Figure 2 is not very accurate. In this image, the man is playing basketball with his left hand. 
The proposed ADHMR predicts that the right hand is also on the ball. However, the shadow on the ground indicates that the right hand is far away from the ball. In fact, this image is the 78th frame of the sequence "courtyard_basketball_00". From the whole sequence, we can see that the right hand of the man is placed to the right of his right leg and is totally occluded. Therefore this sample is not that good. Other Comments Or Suggestions: L.087: (Rafailov et al. 2024) is duplicated. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for recognizing the strong performance of our method and the clear presentation of our paper. We deeply appreciate your constructive comments and address them below. ### **Q1. Equation 5 elucidation** Thank you for your helpful comment. Our formulation follows the approach introduced in Diffusion-DPO [A], which provides a detailed derivation (see Section 4 and Supp. S2 of that paper). To improve readability, we will revise our manuscript to explicitly cite this reference at Line 254 and include a pointer to the key steps in the original paper. [A] Wallace, Bram, et al. "Diffusion model alignment using direct preference optimization." CVPR. 2024. ### **Q2. Results in Table 5 not in Table 2** We separate Table 2 and Table 5 as they serve different goals. Table 2 evaluates the ADHMR framework with SMPL-based methods for fair comparison, while Table 5 focuses on data cleaning with more challenging SMPL-X data. Methods in Table 5 are deterministic and not compatible with ADHMR. We will clarify this in the final version. ### **Q3. About ScoreNet training set** Thank you for the insightful comment. While we understand the concern, it’s important to note that ScoreNet and our HMR-Scorer were originally trained on different datasets. Specifically, ScoreNet was trained on H3.6M, 3DPW, MPI-INF-3DHP, MPII, COCO, and UP-3D, while our HMR-Scorer was trained on a set of synthetic and human-interaction datasets with higher annotation accuracy and scene diversity, as detailed in Sec. 5.1. To ensure a fair comparison, we re-trained ScoreNet using our HMR-Scorer training sets. 
Please check the results in the updated Table 1:

**GTA-Human**

| Method | PVE PLCC ↑ | PVE SRCC ↑ | MPJPE PLCC ↑ | MPJPE SRCC ↑ | PA-MPJPE PLCC ↑ | PA-MPJPE SRCC ↑ |
|---|---|---|---|---|---|---|
| ScoreNet | 0.52 | 0.49 | 0.52 | 0.50 | 0.47 | 0.43 |
| ScoreNet$\dagger$ | 0.55 | 0.51 | 0.54 | 0.51 | 0.50 | 0.46 |
| HMR-Scorer (Ours) | **0.63** | **0.62** | **0.63** | **0.62** | **0.57** | **0.54** |

**DNA-Rendering**

| Method | PVE PLCC ↑ | PVE SRCC ↑ | MPJPE PLCC ↑ | MPJPE SRCC ↑ | PA-MPJPE PLCC ↑ | PA-MPJPE SRCC ↑ |
|---|---|---|---|---|---|---|
| ScoreNet | 0.55 | 0.51 | 0.55 | 0.50 | 0.50 | 0.46 |
| ScoreNet$\dagger$ | 0.57 | 0.54 | 0.56 | 0.54 | 0.51 | 0.48 |
| HMR-Scorer (Ours) | **0.67** | **0.64** | **0.66** | **0.65** | **0.62** | **0.59** |

**ScoreNet$\dagger$** is the ScoreNet re-trained on the same training sets as HMR-Scorer. The results show that our HMR-Scorer still outperforms ScoreNet$\dagger$ on the same training sets, validating the design of our scoring strategy rather than dataset bias. We will clarify this point and explicitly list the datasets used in Table 1 to improve rigor and transparency. ### **Q4. Seen datasets of methods in Table 2** Thank you for your valuable comment. We will include a clearer description of the datasets used in Table 2 in the revised version. Specifically: 1. ADHMR and HypoNet are trained on the same set of datasets, and ADHMR does not use HMR-Scorer. This comparison demonstrates the effectiveness of our framework even without additional supervision. 2. ADHMR (ITW) is not directly trained on the datasets used for training HMR-Scorer (e.g., DNA-Rendering). However, it may indirectly benefit from these extra datasets.
As you rightly pointed out, Table 4 shows that our method is significantly more effective than directly fine-tuning with those datasets, which further highlights the strength of our approach. ### **Q5. About Figure 2** Thank you for your careful analysis and helpful suggestions. We have updated Figure 2 by adding ground truth and side-view visualizations for better interpretability. In addition, we have replaced the first example with a clearer case. The revised figure is available here: https://postimg.cc/gx0YG3G8 In the original first example, the man’s right hand is fully occluded, and our model still predicts a plausible pose based solely on visible cues. In contrast, HypoNet’s result is not consistent with the observed image evidence. It is worth noting that our model processes single-frame images only and does not leverage any temporal context from the sequence. We appreciate your detailed observation and have reflected this clarification in the updated figure.
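For reference, the PLCC/SRCC numbers in the updated Table 1 are Pearson and Spearman correlations between predicted quality scores and per-sample accuracy (scores are correlated against negated errors so that higher is better on both sides). A minimal sketch with hypothetical arrays; ties in the rank computation are ignored for simplicity:

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient."""
    return float(np.corrcoef(x, y)[0, 1])

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no tie handling)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(x)), rank(np.asarray(y)))

scores = [0.9, 0.7, 0.8, 0.3, 0.5]       # hypothetical scorer outputs
errors = [40.0, 55.0, 48.0, 90.0, 70.0]  # hypothetical per-sample PVE (mm)
neg = [-e for e in errors]               # a good scorer ranks low-error samples high

plcc_val = plcc(scores, neg)
srcc_val = srcc(scores, neg)
```

In this toy case the score ranking matches the error ranking exactly, so SRCC is 1 while PLCC measures how linear the relationship is.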
Covered Forest: Fine-grained generalization analysis of graph neural networks
Accept (spotlight poster)
Summary: This paper presents a study on the generalization abilities of sum-aggregation message passing graph neural networks (GNNs) based on a covering number approach. Towards this goal, they employ the so-called forest distance pseudometric, which is an intuitive re-formulation of the tree mover’s distance by Chuang and Jegelka. In particular, they exploit the Lipschitzness of GNNs w.r.t. this pseudometric, and use the results from Xu and Mannor to obtain generalization bounds. Moreover, the paper deals with GNNs with mean (rather than sum) aggregation by defining a slightly modified metric. Claims And Evidence: All claims are supported by formal proofs. Moreover, the theoretical results are supported by experimental evidence. Methods And Evaluation Criteria: The proposed theoretical approach of combining a metric for graphs such that GNNs are Lipschitz (or equicontinuous) with respect to it with the robustness framework of Xu and Mannor indeed makes sense for deriving data-dependent generalization bounds for GNNs. The datasets chosen for the experimental evaluation are few, but for a theory-driven paper this is not a problem. Theoretical Claims: I checked the proofs for most of the claims in the main paper, albeit not in depth, and as far as I can tell they seem correct. Experimental Designs Or Analyses: The datasets chosen for the experimental evaluation are few, but for a theory-driven paper this is not a problem. The experimental setup described in Section Q is sensible, and the code is available. Supplementary Material: I checked the provided code, but I did not run it. Relation To Broader Scientific Literature: The paper addresses a gap in the literature, namely obtaining tight data-dependent generalization bounds for GNNs. The results are non-trivial, but somewhat incremental on the papers it is based on. In particular, the tree distance is obtained from Böker 2021 and the forest distance is a re-formulation of the tree mover’s distance by Chuang and Jegelka 2022.
Moreover, the generalization bounds are obtained from the pre-existing work of Xu and Mannor. Nonetheless, I think this is an important advancement in the learning theory of GNNs. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - the paper seems to be technically sound; as far as I can tell, the proofs seem correct. - the paper addresses a very relevant problem, obtaining tight generalization bounds for GNNs. Weaknesses: - as already mentioned, the paper is somewhat incremental on the papers it is based on. - the paper is very heavy and difficult to understand, and at times it lacks clarity in what it wants to achieve and how to prove it. It might be beneficial to add some more intuitive discussion between the various propositions. For example: - an intuition of the behavior of $\bar\gamma^{\leftarrow}$ would help understand the results of Proposition 9 and its consequences - an explanation on the choice of using the Tree distance rather than the Forest distance (see also questions) - an intuition on the construction of Theorem 10 (again, see questions) - the fact that an estimated Lipschitz constant (rather than an upper bound for the entire GNN class) is used in the results of Table 1 should be stated clearly, and not hidden in Section Q. Other Comments Or Suggestions: Minor comments: * In eq. 7, A(G) is never defined (only in the Appendix). Please double-check that everything is defined properly. * in lemma 2 the comma in the "only if" is misplaced * the footnote 7 on the "technical reasons" should really be part of the Proposition. Questions For Authors: - Why do you use the Tree distance for unlabeled graphs and the Forest distance for labeled graphs? Could one not use always the Forest distance? This distinction should be clarified. - I don't understand the relevance of the subsection on Otter trees, as this is an artificial setting with little practical relevance.
Could you motivate your choice of inserting this in the main paper? I think that, e.g., the construction of Theorem 10 that allows you to obtain a cover of size m/(k+1) is much more interesting. ---- #### Update After the rebuttal, my recommendation remains mostly positive. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We thank the reviewer for their fair and constructive review.** > the paper is very heavy and difficult to understand, and at times it lacks clarity in what it wants to achieve and how to prove it. It might be beneficial to add some more intuitive discussion between the various propositions. For example: an intuition of the behavior of $\bar\gamma^{\leftarrow}$ would help understand the results of Proposition 9 and its consequences an explanation on the choice of using the Tree distance rather than the Forest distance (see also questions) We agree that the paper is relatively dense. We will use the additional page in the camera-ready version of the paper to add more intuition and guide the reader. For example, we will add intuitive explanations before a theorem/proposition. > the fact that an estimated Lipschitz constant (rather than an upper bound for the entire GNN class) is used in the results of Table 1 should be stated clearly, and not hidden in Section Q. We will explicitly mention that we do not compute the uniform Lipschitz constant for our experiments but instead use an empirical Lipschitz constant. >Minor comments: >In eq. 7, A(G) is never defined (only in the Appendix). Please double check that everything is defined properly. >in lemma 2 the comma in the "only if" is misplaced >the footnote 7 on the "technical reasons" should really be part of the Proposition. Thank you for your comments. We will consider all minor comments. > Why do you use the Tree distance for unlabeled graphs and the Forest distance for labeled graphs? Could one not use always the Forest distance? This distinction should be clarified. The decision to use both the tree distance and forest distance is motivated by: (i) Computational complexity: The tree distance with the L2-norm can be computed in polynomial time (see our response to Reviewer e2Pe), and we will move this explanation to the main text.
(ii) The tree distance allows us to derive generalization bounds for graph classes without requiring bounded degree assumptions on graphs, which are necessary for the finiteness of the covering number using the forest distance. > I don't understand the relevance of the subsection on Otter trees, as this is an artificial setting with little practical relevance. Could you motivate your choice of inserting this in the main paper? I think that, e.g., the construction of Theorem 10 that allows you to obtain a cover of size m/(k+1) is much more interesting. The motivation was to show that in this specific graph class, the covering number admits an upper bound that decreases exponentially with the radius. This shows that our framework yields significantly tighter generalization bounds to structurally more simple graphs. **Please consider updating your score if you are satisfied with our answer. We are happy to answer any remaining questions.** --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I'm confident that by addressing my and the other reviewer's comments, the paper will improve even further. Because of this, my recommendation remains positive after the rebuttal. However, I'm not fully convinced by some answers. I still find the result on Otter trees too artificial. Moreover, I understand the advantages of the tree distance, but not why you applied this only to unlabeled graphs. Couldn't you use it also on labeled graphs and obtain results for both distances on both classes of graphs? Because of this, but mostly due to the general lack of clarity in the paper, and since at the moment I cannot assess how this will be addressed, I don't feel comfortable raising the score beyond a weak accept. --- Reply to Comment 1.1.1: Comment: Thanks for your response regarding the extension of the tree distance for labeled graphs. 
We agree that a uniform treatment would be preferred; however, proving uniform continuity of the tree distance for labeled graphs is highly nontrivial and remains open. We are actively working on this problem. We will make this clearer in the revised version of the paper to streamline the presentation.
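As a side note for readers, the empirical Lipschitz constant mentioned in this thread (used for Table 1 instead of a uniform bound over the GNN class) can be estimated as the largest output-to-distance ratio over sample pairs. A hedged numpy sketch, assuming a precomputed pairwise pseudo-metric matrix `D` and per-graph MPNN outputs `Y` (both hypothetical inputs, not the authors' code):

```python
import numpy as np

def empirical_lipschitz(D, Y, eps=1e-12):
    """Largest ratio ||Y_i - Y_j|| / D_ij over pairs with D_ij > eps.

    D: (m, m) pairwise distances (e.g., forest distance between graphs).
    Y: (m, d) MPNN outputs, one row per graph.  Pairs at pseudo-metric
    distance ~0 are skipped to avoid division by zero.
    """
    m = len(D)
    best = 0.0
    for i in range(m):
        for j in range(i + 1, m):
            if D[i][j] > eps:
                best = max(best, float(np.linalg.norm(Y[i] - Y[j])) / D[i][j])
    return best

# Toy example with 3 graphs and scalar outputs.
D = np.array([[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]])
Y = np.array([[0.0], [0.5], [3.0]])
L_hat = empirical_lipschitz(D, Y)
```

Such an estimate lower-bounds the true Lipschitz constant of the trained network on the sample, which is why it yields tighter (but data-dependent) numbers than a uniform bound.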
Summary: This paper presents a new framework for analyzing generalization properties of Message-Passing Neural Networks (MPNNs) via fine-grained graph pseudo-metrics. These distances capture subtle structural similarities that the usual 1-WL equivalence classes overlook. The key theoretical results show that MPNNs of varying aggregation schemes (sum, mean) remain Lipschitz with respect to these distances, allowing the authors to derive covering number based generalization bounds that account for partial graph similarity rather than a purely binary (same/different) classification. Empirically, the paper demonstrates that these refined metrics yield more accurate coverings and tighter bounds, thereby better predicting real-world generalization gaps. Claims And Evidence: The authors provide clear definitions and statements of theoretical results and provide proofs for all theoretical claims. Their experiments, while somewhat limited in scope and scale, show results consistent with the theoretical discussion; in particular, the coverage size shrinks significantly under these pseudo-metrics, which corresponds to smaller generalization gaps (Tab. 1). This addresses (though does not exhaustively prove) the claim of better real-world bounds. Overall, the main claims are supported by theoretical analysis and experiments, prior work on 1-WL alignment and generalization bounds is discussed in detail. Methods And Evaluation Criteria: The theoretical analysis is overall sound. The data selected for experiments seems appropriate (though somewhat limited in scale and scope). Theoretical Claims: The paper provides clear statements of theoretical claims and proves. Proof sketches/ short descriptions of the proof idea are given in the main text, more detailed analysis is in the appendix. While I have not checked every detail in the proofs in the appendix, the arguments and line of reasoning seems correct, I did not see any obvious gaps or errors. 
Experimental Designs Or Analyses: The experiments mostly use standard GNN benchmarks (MUTAG, NCI1, etc.) and include a demonstration of how the covering number changes with the radius parameter. The authors show correlations (Figures 8–11 in appendix) between the distances and MPNN output differences, which supports the Lipschitz-type claim. While sample sizes in these experiments are typical for classification tasks on molecular graphs, it might be useful to see a broader range of data sets, possibly with more variety in graph topologies. But the presented experiments are overall adequate and support the theoretical results. Supplementary Material: I did a cursory read. I did not check all proof details. Relation To Broader Scientific Literature: The paper clearly connects to the established line of work about 1-WL expressivity (Morris et al., Xu et al.). It complements prior work that used coarser, purely discrete distances (i.e., 0 vs. 1 if the graphs are 1-WL distinguishable). The authors also provide comprehensive references to relevant works on GNN generalization. I did not notice major omissions in related literature. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The introduction of refined metrics that preserve WL-like expressivity but provide more “granularity” is conceptually novel and addresses gaps in prior analyses. - The Lipschitz continuity argument for sum- and mean-aggregation MPNNs is handled neatly and supports a variety of common network layers. - The paper is well-written. Weaknesses: - The paper’s experimental scope is somewhat narrow, focusing on a small set of molecules/graphs; the empirical portion might be strengthened by more extensive analysis, including for other graph topologies. - One potential weakness is the computational complexity of computing pseudo-metrics at scale on large graphs. The authors do mention computational complexity but might elaborate more on possible approximations.
Other Comments Or Suggestions: N/A Questions For Authors: see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We thank the reviewer for their detailed and constructive review.** > One potential weakness is the computational complexity of computing pseudo-metrics at scale on large graphs. The authors do mention computational complexity but might elaborate more on possible approximations. Thank you for highlighting computational complexity. Throughout the paper, we discuss two main distances: the tree distance (with either the cut norm or the l2-norm) and the forest distance. As shown in Lemma 3, the forest distance is equivalent to the Tree Mover’s Distance (TMD) introduced by Chuang et al. This allows us to compute it efficiently using dynamic programming (as shown in the original paper) in time $\mathcal{O}(t(n) + L\,n\,t(q))$, where $q$ is the maximum degree of a node (in both graphs), $n$ the number of nodes, and $t(m) = \mathcal{O}(m^3 \log m)$. The tree distance with the L2-norm can be computed in polynomial time (more precisely, an $\varepsilon$-solution), as detailed in “Interior-Point Polynomial Algorithms in Convex Programming” (Section 6.3.3). On the other hand, the cut norm poses a much more difficult combinatorial optimization problem. Its computation is indeed challenging. One of the most well-known approximation techniques is given in “Approximating the Cut-Norm via Grothendieck’s Inequality”. Additional relevant works will be added to the main paper (“Random Sampling and Approximation of MAX-CSP Problems”, Alon et al.; “Quick Approximation to Matrices and Applications”, Frieze & Kannan). It is worth mentioning that for the theoretical results in our paper, either the tree distance with the cut norm or with the l2-norm can be used interchangeably, as they define the same topology—this is discussed in “Fine-Grained Generalization Analysis.” The revised paper will add a detailed discussion regarding the pseudo-metrics computational complexity.
> The paper’s experimental scope is somewhat narrow, focusing on a small set of molecules/graphs; the empirical portion might be strengthened by more extensive analysis, including for other graph topologies This is a good point. However, we view our work as primarily theoretical. We will try our best to include experiments with other graph structures. **Please consider updating your score if you are satisfied with our answer. We are happy to answer any remaining questions.** --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed rebuttal. I will maintain my score of acceptance.
Summary: This paper first defines three pseudo-distances on graphs compatible with 1-WL or its variants. First, the labeled tree distance, which is an extension of the tree distance to graphs with node features, is defined, and the equivalence with 1-WL indistinguishability is shown. Next, the forest distance is defined using graph unrolling, and the equivalence with 1-WL indistinguishability up to L-time unrolling is shown. Finally, the mean forest distance is defined as a variant of the forest distance, and the equivalence with 1-MWL is shown. The generalization error bounds for MPNNs are derived using these distances by showing that MPNNs are equicontinuous with respect to the tree distance and Lipschitz continuous with respect to the forest distance. These guarantee the robustness of MPNNs, which in turn gives generalization error bounds in terms of the covering number. Numerical experiments evaluate the relationship between the covering number and WL-indistinguishable graphs, the relationship between the forest distance and the output of MPNNs, and the relationship between the generalization performance and the generalization bounds. ## update after rebuttal I thank the authors for responding to my review comments. I am satisfied with them. Although the numerical evaluations align with the theoretical analyses, I think we need more significance to be eligible for strong acceptance. Therefore, I keep my score (4. accept) Claims And Evidence: The main claim of this paper is that introducing similarity-based distances, such as the labeled tree and forest distances, makes generalization bounds tighter than existing VC-dimension-based bounds. This claim is supported by Proposition 9 (for MPNNs with order-normalized sum aggregation on unlabelled $n$-order graphs), Proposition 15 (for MPNNs with sum aggregation on attributed bounded-degree graphs), and their corollaries.
It is claimed that these theoretical results provide insight into the generalization performance of empirical MPNNs. However, as discussed below, I have questions about the design and interpretation of the numerical experiments in Section 5 and think that they are not sufficient as evidence. Methods And Evaluation Criteria: The theoretical analysis is appropriate to support the claim because, in general, generalization bounds via the covering number are tighter than those via the VC dimension. In addition, Appendix L, in which the bound using the VC dimension is derived from the result of this paper, justifies the claim. On the other hand, the three numerical experiments in Section 5 are not convincing. In particular, I have questions about Q1 and Q3: Q1. We can expect the experiment's results from the definition of the covering number and WL-indistinguishability. Therefore, I have a question whether the observations from this experiment are new. Q2. We see that the forest distance and changes in the MPNN output are correlated and that small changes in forest distance do not significantly change the output. This observation is consistent with Lemma 14. Q3. The experiment shows that the generalization performance gap and theoretical upper bounds have the same scale. However, we cannot say the tightness of the bound from this observation. Theoretical Claims: I read the proofs of the theorems in this paper, specifically Section E, Proposition 38 in Section J, Proposition 43 in Section K, and Section O, and the lemmas associated with them. However, due to time constraints, I could not carry out a detailed check line by line. Experimental Designs Or Analyses: I checked Sections Q and R. Although $\mathcal{F}_n$ is referred to as a set of graphs, it is not defined in the main text or appendix. I suggest adding its definition. Supplementary Material: I reviewed the sections I mentioned in *Theoretical Claims* and *Experimental Designs Or Analyses*.
I also reviewed Section A (Related works) and other sections to check notations. Relation To Broader Scientific Literature: This paper extends the tree distance proposed in [Böker, 2021] and defines the labeled tree distance and the forest distance. Then, as analogs of the tree distance, the equivalence between these distances and 1-WL indistinguishability (or its variants) is shown. Furthermore, the robustness of MPNNs with respect to those pseudo-distances is used to show generalization performance bounds of MPNNs in terms of the covering number. This result is a refinement of the bound obtained in [Morris et al. (2023a)], which uses the VC dimension. Essential References Not Discussed: As far as I checked the related studies, the coverage of existing studies on generalization analysis for node and graph prediction tasks is OK. However, I may have missed some important literature since I am less familiar with graph prediction tasks. Other Strengths And Weaknesses: One of the weaknesses of this paper is the excessive appendix (37 pages), which reduces its clarity. For example, as this paper mainly focuses on generalization of MPNNs, the discussion on the mean forest distance seems to be different from its main scope. On the other hand, I am positive about adding explanations on the cut norm, graphon, and WL algorithm in the appendix because they allow readers unfamiliar with these concepts to understand the paper in a self-contained manner. Other Comments Or Suggestions: 1. this paper uses two terms (labeled and attributed) to describe a node with a feature. I want to clarify whether they are different concepts. 2. l.400(right): Unless I am missing something, $\mathcal{F}_n$ is undefined. 3. L.431(left): *How are the Forest distance and MPNN outputs correlated?* is better 4. L.1652: *The above generalization bound*: I could not identify the bound this sentence refers to. 5. L.1696: *Theorem 35 in the main paper*: Theorem 35 is not in the main text but in the appendix.
Questions For Authors: N.A. Ethical Review Concerns: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We thank the reviewer for their fair and constructive review.** > Q1. We can expect the experiment's results from the definition of the covering number and WL-indistinguishability. Therefore, I have a question whether the observations from this experiment are new. You are correct that, by definition, we expect the covering number to decrease with the radius and increase with the order. What is new in our work is that we derive explicit upper bounds on the covering number for different graph families ($n$-order graphs, Otter trees, and an artificial graph class), capturing how it scales with the radius and graph order. Specifically: i) Proposition 10 shows that for $n$-order graphs, the covering number is upper bounded by $m_n/(k+1)$, showing a linear decrease with the radius and an $\mathcal{O}(m_n)$ increase with $n$. ii) Proposition 12 establishes a stronger bound for Otter trees, showing exponential decay in the radius and again an $\mathcal{O}(m_n)$ dependence on $n$. iii) Appendix N introduces an artificial graph class where the covering number scales with $m_n/n$ instead of $m_n$. Hence, we empirically compute covering numbers in Figure 8 and observe trends that align with the theoretical bounds. These results help validate the tightness of our derived bounds. Indeed, for $n$-order graphs, the covering number can be bounded by a linear function of the radius, and for Otter trees, it can be bounded by an exponential function of the radius. However, the current phrasing of Q1 may not fully reflect this. We will revise it to “To what extent do the empirical covering numbers for different graph families match the theoretical upper bounds derived in Section 4?” > Q3. The experiment shows that the generalization performance gap and theoretical upper bounds have the same scale. However, we cannot say the tightness of the bound from this observation.
Our goal in Q3 was not to claim that the bound is tight in the formal sense, but to show that the bound and the generalization gap are of the same scale in practice. We will clarify this point. > I checked Sections Q and R. > Although $\mathcal{F}_n$ is referred to as a set of graphs, it is not defined in the main text or appendix. I suggest adding its definition. Thank you for pointing this out. $\mathcal{F}_n$ is an artificial graph class defined in Appendix N. We will include its definition in the main text of the revised paper. > One of the weaknesses of this paper is the excessive appendix (37 pages), which reduces its clarity. For example, as this paper mainly focuses on generalization of MPNNs, the discussion on the mean forest distance seems to be different from its main scope. On the other hand, I am positive about adding explanations on the cut norm, graphon, and WL algorithm in the appendix because they allow readers unfamiliar with these concepts to understand the paper in a self-contained manner. We acknowledge that the appendix is quite extensive. However, our goal was to ensure that all notations and definitions used in the proofs are rigorously defined and that the paper remains self-contained for readers with varying backgrounds. For that reason, we will provide some additional discussion in Appendices D, K, and C about the intuition behind the cut norm, graphons, and the WL algorithm, respectively. We defined the mean forest distance to show that mean aggregation MPNNs satisfy the Lipschitz property for a coarser pseudo-metric (mean forest distance), leading to tighter generalization bounds. We will clarify this motivation in the main paper. Overall, we will use the additional page to make the main part more self-contained. > this paper uses two terms (labeled and attributed) to describe a node with a feature. I want to clarify whether they are different concepts. > l.400(right): Unless I am missing something, $\mathcal{F}_n$ is undefined.
> L.431(left): How are the Forest distance and MPNN outputs correlated? is better > L.1652: The above generalization bound: I could not identify the bound this sentence refers to. > L.1696: Theorem 35 in the main paper: Theorem 35 is not in the main text but in the appendix. Thank you for pointing this out. 1. The terms labeled and attributed refer to the same underlying concept, with a minor distinction noted in the background section: labeled typically refers to discrete labels drawn from a finite set, whereas attributed generally refers to continuous feature vectors in $\mathbb{R}^d$. 2., 5. Thank you. We will correct this in the camera-ready version. 3. Thank you for the suggestion. 4. This should have been “improved the generalization bound in \cref{robustness_bound}”, referring to the one introduced by Xu & Mannor. We will correct this in the camera-ready version. **Please consider updating your score if you are satisfied with our answer. We are happy to answer any remaining questions.** --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions and comments. I am satisfied with the authors' responses.
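To make the empirical covering numbers discussed in Q1 concrete for readers: given a pairwise distance matrix on a finite sample, an upper bound on the covering number at a given radius can be computed with a simple greedy cover. A minimal sketch under assumed inputs (`D` is a hypothetical forest-distance matrix; this is not the authors' implementation):

```python
import numpy as np

def greedy_cover_size(D, radius):
    """Greedy upper bound on the covering number of a finite sample.

    D: (m, m) symmetric pairwise distances.  Each step picks an uncovered
    point as a center and removes everything within `radius` of it.
    """
    uncovered = set(range(len(D)))
    centers = 0
    while uncovered:
        c = min(uncovered)  # any uncovered point works as the next center
        uncovered -= {j for j in uncovered if D[c][j] <= radius}
        centers += 1
    return centers

# Toy example: two tight clusters of two graphs each.
D = np.array([[0, 1, 9, 9],
              [1, 0, 9, 9],
              [9, 9, 0, 1],
              [9, 9, 1, 0]], dtype=float)
small_r = greedy_cover_size(D, 1.0)   # each cluster needs its own center
large_r = greedy_cover_size(D, 10.0)  # one ball covers everything
```

As the radius grows the cover shrinks, which is the qualitative trend the paper's Figure 8 checks against the derived bounds.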
Summary: This paper studies generalization bounds for message passing networks. The authors extend the generalization framework of using pseudo-metrics to graphs, with a focus on studying which graph pseudo-metric is suitable to obtain a tight bound for the generalization error of MPNNs. Specifically, the authors studied the tree distance for the non-labeled case and the tree mover's distance for the labeled case, showing that a fine-grained and 1-WL-expressive graph distance can provide a much tighter generalization bound compared to a discrete 1-WL metric. The main contribution contains several parts: 1) successfully applying the generalization framework of Xu & Mannor to the graph setting by defining graph pseudo-metrics; 2) based on the framework, the key to tightening the bound is to find a proper metric with a smaller covering number, and the authors have shown the advantage of using the tree distance and the tree mover's distance; 3) the authors have also done a fine-grained study of different model architectures and loss functions. In the experiment section, the paper shows that their bound is tight for the empirical generalization gap. Claims And Evidence: Without checking the correctness of the proofs, I do think the authors have supported their claims, both theoretically and empirically. Methods And Evaluation Criteria: 1. The authors have designed several experiments to show that their derived bounds and insights can provide guidance in real-world graph MPNN settings. 2. Nevertheless, it would be interesting to see how sum and ordinary aggregation differ in theory and experiment. Theoretical Claims: No, I have not checked. I will not provide a confident review of their theorems. Experimental Designs Or Analyses: Yes, I have checked. I think the experiments, while being limited, align with the theory presented in the paper. Supplementary Material: I have checked related works and some figures, as well as additional results of experiments.
Relation To Broader Scientific Literature: I think the authors made a good step in linking several previous works on generalization bounds, and take a step forward for MPNN generalization bounds. I value its theoretical contribution toward making the generalization study of GNNs more complete. Essential References Not Discussed: I'm not aware of any. But I should also acknowledge that I'm not very familiar with the generalization bound literature. When I check the related work section in the appendix, I feel the authors have mentioned many recent papers in the area. Other Strengths And Weaknesses: ### Strength 1. The authors successfully applied a modern generalization framework to the graph setting and linked the framework with many recent graph metrics. Alongside a good theoretical contribution, the authors also confirm that the derived bound is relatively tight in empirical settings. 2. Compared with previous literature, the use of a fine-grained graph similarity metric is a great step forward for the GNN generalization area. If all claims and theories shown are correct, the contribution should be significant for future studies. 3. The authors did a great job in the literature review and background introduction. 4. The figures presented in the paper are very helpful for understanding the equations and key ideas. ### Weakness 1. Some parts are not clear enough, and the main content is not self-contained without looking at the appendix. For example, the equation in line 187 does not explain the definition of unr. Also, Figure 4, referenced in line 211, is essential for understanding the presented metric; I think the authors should move it to the main paper. 2. In the meantime, I think many claims and theorems could be improved in presentation, by simplifying notation and moving minor bounds to the appendix. This would make reading and understanding much better. 3. Section 4, while being the main section of the paper, lacks good structure.
The authors present all findings and derivations sequentially at a single flat level. I think this makes the reading harder. The authors should think about how to present this section in a more structured and simpler way. Other Comments Or Suggestions: I think the authors could consider adding additional experiments to cover the influence of graph structure, aggregation, and loss function, which is claimed in the theory and contributions but not empirically studied in the experiments. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We thank the reviewer for their detailed and constructive review.** > Some parts are not clear enough, and the main content is not self-contained without looking at the appendix. For example, the equation in line 187 does not explain the definition of unr. Also, Figure 4, referenced in line 211, is essential for understanding the presented metric; I think the authors should move it to the main paper. > In the meantime, I think many claims and theorems could be improved in presentation, by simplifying notation and moving minor bounds to the appendix. This would make reading and understanding much better. > Section 4, while being the main section of the paper, lacks good structure. The authors present all findings and derivations sequentially at a single flat level. I think this makes the reading harder. The authors should think about how to present this section in a more structured and simpler way. Thank you for pointing out these omissions. We will use an additional page in the camera-ready version to explain the unrolling trees in the main text and refer the reader to their definition in Appendix C. Given the extra page available, we will include an introductory paragraph in Section 4 highlighting the main results and guiding the reader through the section's flow to improve readability. > I think the authors could consider adding additional experiments to cover the influence of graph structure, aggregation, and loss function, which is claimed in the theory and contributions but not empirically studied in the experiments. Thank you for this suggestion; we will try our best to include these experiments to better align theory and practice. **Please consider updating your score if you are satisfied with our answer. We are happy to answer any remaining questions.**
FlexiClip: Locality-Preserving Free-Form Character Animation
Accept (poster)
Summary: In this paper, the authors propose a new method, named FlexiClip, to achieve better temporal coherence and geometric consistency in animated clipart. To better preserve motion smoothness without introducing geometric distortions, FlexiClip utilizes the probability flow ODE (pfODE) to model the evolution of temporal Jacobians, which incrementally adjust the spatial geometry over time to account for temporal variations. As a key technical contribution, a flow matching loss is further used to optimize the reduction of temporal noise. Qualitative and quantitative comparisons are performed to validate the effectiveness of the proposed method. Claims And Evidence: Please see the comments below. Methods And Evaluation Criteria: Yes. I think the proposed method is technically sound. Theoretical Claims: I have checked the proofs in Section C. Experimental Designs Or Analyses: Yes, please see the **Claims And Evidence** part. Supplementary Material: I have reviewed the appendix and the supplementary materials. Relation To Broader Scientific Literature: I think this submission has already listed sufficient related papers. Essential References Not Discussed: I think the current related work part is okay. Other Strengths And Weaknesses: **Strengths**: 1. This paper is well written. I can follow it easily. 2. From my understanding, I like the basic idea of FlexiClip, which is simple and clear. 3. The authors perform sufficient experiments to verify the effectiveness of the proposed method. **Weakness**: I am not an expert in this area, so I just have some ideas to help the authors refine this work. 1. The visual improvement over AniClipart is limited. For example, in Figure 3, both AniClipart and FlexiClip can synthesize good shapes with adaptive details. I also cannot perceive meaningful differences between these two methods on the supplementary website. 2. The current experimental results can only support rigid motion rather than non-rigid complicated motions.
Could you please provide more challenging demo results in the revision? 3. Can the key contribution of FlexiClip be extended to 3D shapes? Can the authors share some insights on this? 4. Could you also conduct some user studies for the ablation study? For some examples shown in Tables 6-7, the visual differences between the ablated model and the full model are minor. Adding some user study results could help reach comprehensive conclusions. 5. Typos: L22-L23: "Additionally, Recent" -> "Additionally, recent"; --- ## Update After Rebuttal I thank the authors for providing such a detailed response, which addresses most of my concerns. Thus I will maintain a positive tone towards this submission. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Below, we provide detailed responses to each of the weaknesses you raised: 1. **Figure 3 Analysis**: Upon closer inspection, you can observe that AniClipart distorts objects (e.g., hand distortion in the boy/girl jumping and woman dancing examples), lacks proper text conditioning (e.g., the parrot), and exhibits poor temporal consistency (e.g., the boy jumping). If the figures are unclear, we encourage you to view the detailed versions on our anonymous webpage: https://creative-gen.github.io/flexiclip.github.io/. 2. **Additional Results**: The webpage contains more extensive results, showcasing a variety of motion complexities, including couple dancing and two-object interactions. We are also in the process of adding more results, including 3D motion animations. 3. **3D Extension**: Thank you for your question regarding 3D extensions. FlexiClip can be easily adapted for 3D animation, as it only requires adding one extra coordinate and learning the Bézier trajectory over it. This modification is straightforward for our approach. We will release the code with configurable 2D and 3D settings. 4. **User Studies on Ablation Results**: We will conduct user studies on the ablated results as well. However, if you are referring to minor differences between the ablated and full models, we recommend checking the videos on our webpage. While the images may not clearly highlight these differences, the video outputs of the full and ablated models exhibit significant variations. We will include these results in the final draft for further user study. 5. **Typographical Errors**: We appreciate your feedback regarding typos and will carefully proofread the final revision to ensure all errors are corrected. Once again, we sincerely appreciate your time and effort in reviewing our paper. We kindly request you to consider our responses and, if appropriate, please increase your score accordingly. 
--- Rebuttal Comment 1.1: Comment: Dear Authors, I thank you for your feedback and for trying to address my concerns. While the last three points have been addressed, I am still worried about the first two points. It is probably somewhat subjective, but I still cannot perceive big differences in the visual comparisons. Also, the presented motions are a bit simple. As I am not an expert in this area, I will make my decision based on the other reviewers' comments. --- Reply to Comment 1.1.1: Comment: Thanks for letting me know. Here we detail the first two points further: **Your first point regarding Figure 3** There are four animations in Figure 3; let's go over them one by one. (We again request that you view all these animations on the GitHub page: https://creative-gen.github.io/flexiclip.github.io/.) 1. In the first animation, a woman is dancing. Comparing both methods, AniClipart generates an animation where the girl's hand is distorted, while this is not the case with our proposed solution, FlexiClip. 2. In the second animation, the prompt asks the bird to flap its wings, but the animation generated by AniClipart only flaps the tail, not the wings, hence showing poor text alignment; the animation generated by FlexiClip actually flaps the wings. This is the clear difference between the animations generated by AniClipart and FlexiClip. 3. In the third animation, we wanted the boy to jump and hit the shuttlecock, which requires absolutely zero distortion in any of his body parts. The animations generated by AniClipart distort the boy's hands, while FlexiClip does not distort any of his body parts. See the actual motion on the page as well; the FlexiClip motion is much smoother than the one generated by AniClipart. 4.
In the fourth animation, we wanted the girl to throw the handball while jumping. Again, the animation generated by AniClipart distorts the girl's hands, while FlexiClip generates a zero-distortion animation; if you check the actual animation on the GitHub page, the jump is much smoother, rather than the quick and abrupt jump in AniClipart. **Your second point regarding simple animations** We again request that you watch the GitHub page: https://creative-gen.github.io/flexiclip.github.io/ 1. There are four animations in which two people (couple, multi-object) are animated in a single pass. This is not simple, since generating multi-object animation requires modeling two people at the same time and making their motions coherent with each other; doing so with just a simple SDS loss is a fairly complex task. 2. Other animations, such as the girl swaying on the hammock, are further examples of complex tasks, since the hammock and the girl's movement are generated in sync with each other. 3. Additionally, we showed multi-condition animation, where a single animation follows multiple conditions specified in the prompt. 4. The swaying flower example with rotation shows the additional level of complexity we have introduced. We request that you please go over the GitHub page: https://creative-gen.github.io/flexiclip.github.io/ Again, we value the time and energy it takes to review our paper. Many thanks for reviewing it.
Summary: The paper proposes FlexiClip, a novel approach designed to overcome the limitations of prior clipart animation methods by addressing the intertwined challenges of temporal consistency and geometric integrity. It extends traditional Bezier curve-based trajectory modeling with (1) temporal Jacobians to correct motion dynamics incrementally, (2) continuous-time modeling via pfODEs to mitigate temporal noise, and (3) a flow matching loss inspired by GFlowNet principles to optimize smooth motion transitions. The method sets a new standard for clipart animation. Claims And Evidence: Yes. The paper's core claims about FlexiClip are well-supported by compelling evidence. Visual identity preservation is validated through higher CLIP scores (0.9563 vs 0.9401) and 94.90% user preference. Temporal consistency improvements are demonstrated via lower TC scores (8.14 vs 8.51) and effective ablation studies. Text-video alignment claims are substantiated by improved X-CLIP scores (0.2102 vs 0.2075) and 94.54% user preference in alignment ratings. Methods And Evaluation Criteria: Yes, the FlexiClip method makes sense to me. The bitmap metrics and animation metrics used, as well as the proposed deformation smoothness and animation energy, also make sense to me. Theoretical Claims: Yes. The main theoretical foundation is in Section 2.2 and Appendix C, where the authors develop their continuous-time modeling approach using probability flow ODEs. I didn't find any obvious errors in their mathematical derivations. Experimental Designs Or Analyses: Yes, the comparison and ablation study are sufficient. The sample size (30 participants) for the user study is moderate but acceptable. Supplementary Material: Yes. External ablation studies, qualitative comparison between FlexiClip and T2V/I2V models.
Relation To Broader Scientific Literature: The FlexiClip paper makes three key innovative contributions to animation research: (1) It introduces temporal Jacobians that incrementally correct spatial deformations, addressing the frame-to-frame inconsistency problems in prior works like AniClipart (Wu et al., 2024); (2) It adapts probability flow ODEs (originally from generative modeling by Song et al., 2020) to animation contexts, creating a continuous-time framework that preserves geometric structure during complex movements; and (3) It implements a flow matching loss inspired by GFlowNet principles (Bengio et al., 2023) that optimizes smooth transitions between frames. The significance lies in FlexiClip's mathematical formulation that unifies spatial and temporal modeling, effectively solving long-standing challenges in clipart animation where visual identity preservation must be balanced with natural motion dynamics. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Other strengths: The development of novel metrics (DS and AE) shows thoughtfulness in how animation quality should be measured. The paper clearly identifies and addresses a specific limitation in prior work, the trade-off between spatial coherence and temporal consistency in clipart animation. Other weaknesses: The evaluation focuses on relatively simple character animations. Other Comments Or Suggestions: N/A Questions For Authors: How sensitive is the method to the choice of hyperparameters, particularly the λ weight balancing the SDS loss and flow matching loss? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Sensitivity of λ in Balancing SDS Loss and Flow Matching Loss** Thank you for your thoughtful review and for raising this important question regarding the sensitivity of our method to the hyperparameter $\lambda$, which balances the SDS loss and flow matching loss. Through our experiments, we observed that tuning $\lambda$ is crucial for balancing trajectory alignment with the text and ensuring smooth motion. In our setup, we set $\lambda$ = 15 for consistency across all examples. 1. When $\lambda=1$, the model requires significantly more gradient updates—often up to 1000 steps—to generate acceptable motion. 2. Increasing $\lambda$ up to 5 does not significantly reduce the number of required steps. 3. Raising $\lambda$ from 5 to 10 introduces some variability, where good results can sometimes be achieved in 600 steps, but in other cases, up to 900 steps are needed. 4. At $\lambda=15$, nearly all generated motions align well with the text prompt at around 700 steps. 5. Further increasing $\lambda$ results in sharper and faster motion, which appears less smooth. We appreciate this insightful question and will use the additional page allowance in the final paper to provide further details and examples. Additionally, we invite you to explore our GitHub page (https://creative-gen.github.io/flexiclip.github.io/), where we showcase more complex motion examples beyond the simpler ones currently shown in our paper. We sincerely appreciate your valuable feedback. We kindly request you to consider our responses and, if appropriate, please increase your score accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed explanation of λ sensitivity. I look forward to your revised version with additional details on hyperparameter tuning. Additionally, after reviewing your GitHub page and observing the dynamic results, I am convinced of the effectiveness of your approach.
The complex motion examples are impressive and clearly demonstrate significant improvement compared to existing work like DynamiCrafter and LTXvideo. I would like to upgrade my recommendation to Accept for your work. --- Reply to Comment 1.1.1: Comment: Thank you so much, and again many thanks for reviewing our paper, we appreciate the time and effort you put in to review the paper carefully.
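The λ trade-off discussed in this thread amounts to a weighted sum of the two objectives. Below is a minimal illustrative sketch; the two loss functions are placeholders, not FlexiClip's actual SDS or flow matching implementations, and only the reported value λ = 15 comes from the rebuttal above.

```python
# Hypothetical stand-ins for the two objectives; in FlexiClip these would be
# the video SDS term (text alignment) and the flow matching loss (smoothness).
def sds_loss(params: float) -> float:
    return (params - 3.0) ** 2  # placeholder objective


def flow_matching_loss(params: float) -> float:
    return (params - 1.0) ** 2  # placeholder objective


def total_loss(params: float, lam: float = 15.0) -> float:
    # lam balances text alignment (SDS) against smooth motion (flow matching);
    # the authors report lam = 15 works well across their examples.
    return sds_loss(params) + lam * flow_matching_loss(params)
```

With lam = 0 the objective reduces to pure SDS; larger lam weights motion smoothness more heavily, matching the behavior the authors describe (sharper, faster motion at high λ).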
Summary: The paper proposes FlexiClip, a novel method for animating clipart images while preserving temporal coherence and geometric integrity. It extends existing approaches by incorporating temporal Jacobians for incremental motion correction, probability flow ODEs (pfODEs) for continuous-time modeling, and a flow matching loss inspired by GFlowNet principles. The paper claims that these enhancements lead to smoother, more consistent animations while avoiding the geometric distortions and temporal artifacts found in previous methods like AniClipart. The experimental results showcase the advantages of FlexiClip in producing better animations across diverse clipart types. This paper doesn't include an Impact Statement, which is required by ICML. Therefore, I suggest rejection. Claims And Evidence: Most of the claims are supported via experiments and results, though some geometric distortions occur in the results. Methods And Evaluation Criteria: Yeah, the methods make sense to me. Theoretical Claims: None Experimental Designs Or Analyses: Yes, the experiments look sound to me, and this paper follows the experimental setup of an existing work. Supplementary Material: Yes, all supplementary material. Relation To Broader Scientific Literature: I'm not familiar with this area, so it's hard for me to write this part. Essential References Not Discussed: I'm not familiar with this area, so it's hard for me to write this part. Other Strengths And Weaknesses: Strengths: The use of probability flow ODEs (pfODEs) provides a mathematically grounded way to ensure temporal smoothness. The flow matching loss based on GFlowNet principles is a reasonable adaptation. Weakness: Some results show noticeable distortions, particularly in human characters (e.g., hands, faces, and body parts deform unnaturally). While motion smoothness is analyzed, the paper lacks a quantitative evaluation of motion accuracy.
Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your review. We will include the impact statement below. We didn't include an impact statement initially because we thought it would count towards the page limit during the review process and could easily be added upon acceptance, since one extra page is given in the accepted paper to address reviewer comments. Also, per the call for papers (https://icml.cc/Conferences/2025/CallForPapers), reviewers do not have the authority to reject a submission based on a missing impact statement; only program chairs can do that. There is also a FAQ clarifying that papers were not desk-rejected for missing impact statements: https://icml.cc/Conferences/2025/PeerReviewFAQ. Please find below the detailed impact statement for our work; we request that you please revise your acceptance score for this paper: The innovations in FlexiClip have significant implications for multiple domains: **Animation Industry** FlexiClip could streamline production processes for 2D animation, potentially reducing the labor-intensive nature of traditional animation while maintaining artistic quality. The technology could be particularly valuable for independent animators and small studios without access to extensive resources. **Digital Content Creation** As online platforms continue to prioritize dynamic content, FlexiClip's approach could enable creators to efficiently produce engaging animated content from static images, expanding creative possibilities for digital storytelling, marketing, and educational resources. **User Interface Design** The locality-preserving aspects of FlexiClip could inform new approaches to animated user interfaces, where maintaining visual coherence during transitions between states is crucial for user experience. **Broader Implications** Beyond its immediate technical contributions, FlexiClip represents a significant step toward bridging traditional animation principles with modern computational techniques.
By addressing the intertwined challenges of temporal consistency and geometric integrity, the research sets a new standard for character animation that respects both the artistic qualities of the original image and the physical plausibility of movement. The paper "FlexiClip: Locality-Preserving Free-Form Character Animation" introduces transformative advancements in the field of clipart animation, addressing critical challenges such as maintaining visual fidelity and ensuring temporal coherence in character motion. FlexiClip achieves smooth, natural-looking animations while preserving the structural integrity of clipart images. Additionally, its integration with pre-trained video diffusion models through methods like Video Score Distillation Sampling (SDS) bridges traditional animation principles with cutting-edge machine learning frameworks. These innovations have far-reaching implications for the animation industry, enabling streamlined workflows for animators and reducing the labor-intensive nature of traditional animation processes. FlexiClip's ability to generate high-quality animations from static clipart images expands creative possibilities for digital content creation, marketing, and educational applications. Furthermore, its locality-preserving approach holds promise for enhancing animated user interfaces, improving transitions and user experiences. Beyond practical applications, FlexiClip sets a new benchmark in character animation by addressing intertwined challenges of temporal smoothness and geometric fidelity. Its robust framework not only advances artistic expression but also paves the way for future developments in computer graphics, animation technology, and related fields such as robotics and biomechanics. By redefining standards for seamless motion generation, FlexiClip establishes itself as a foundational contribution to the evolution of animation techniques in the digital age. 
--- Rebuttal Comment 1.1: Comment: I understand that ICML does not desk reject papers solely for missing impact statements, so I will adjust my score accordingly. That said, I noticed the authors did not respond to the weakness I raised earlier, and I'm wondering if there's a particular reason for that. --- Reply to Comment 1.1.1: Comment: Thanks for your acknowledgement of the rebuttal, and many thanks for reminding us about the weakness you raised in your review. Please find below our answer to your question on motion accuracy. Motion accuracy is precisely an indicator of how well the animation follows the text prompt with a valid motion trajectory, and that is what we evaluated with the subjective user study presented in Table 3, where we evaluated text-video alignment by collecting ratings from annotators who were instructed to judge whether the character follows a valid motion trajectory as per the prompt. There is no general open-source model to do this, and validating a trajectory requires visual inspection; hence we believe the animations, demonstrated side by side with the text prompt, should be evaluated manually by annotators to obtain a fair judgement. We invite you to see our project page https://creative-gen.github.io/flexiclip.github.io/ with detailed animations shown side by side with their prompts to get a feel for the smooth and accurate animations. We hope this answer resolves your question on motion accuracy.
Summary: This paper addresses several key challenges in the problem of clipart animation. To address noise accumulation along the animation, the paper proposes the novel concept of temporal Jacobians to correct the temporal noise. To ensure smooth temporal transitions between frames, the paper proposes a pfODE to model the temporal Jacobians. A novel flow matching loss is introduced to optimize the temporal noise. The result: high-quality clipart animation with great temporal coherence. The experiments are quite comprehensive. The major technical claims from the paper are all well validated. Update after rebuttal: after reading other reviewers' comments and the authors' rebuttal, my rating of acceptance remains unchanged. Claims And Evidence: I find that all the major technical claims are supported by intuitions and experimental validation. Claim 1: the temporal Jacobians idea is novel, and it is the basis of a solution to temporal noise accumulation. To my knowledge, this novelty claim is true, and the effectiveness is validated in experiment Sec. 4.4 - removing this part leads to noticeable temporal artifacts. Claim 2: the pfODE formulation and the flow matching loss enable temporal smoothness. This claim is supported by the change in several metrics after removing this loss from training (Sec. 4.4). Claim 3: the proposed method suits multiple scenarios including multi-object, layered objects, and rotation. These are demonstrated in experiment Sec. 4.5. Overall, I find this paper solid in making and validating its claims. Methods And Evaluation Criteria: Overall the method makes a lot of sense to me. The paper first identifies the key problem with previous clipart animation methods -- using rigid parameterization (such as the ARAP prior in AniClipart) for motion and not explicitly accounting for the noise accumulation over time.
It is a great idea to decompose the deformation Jacobian into the spatial one and per-time-step corrective Jacobians, with the latter being optimized using the flow matching loss -- treating the spatial Jacobian as the backward process that introduces temporal noise, while the forward process reduces such noise. The formulation is elegant and effective. As for evaluation, the paper compares with a number of recent methods on the same (clipart animation) or neighboring (video generation) tasks. The metrics are chosen to well reflect the major improvements brought by this paper -- temporal coherency, deformation smoothness, and the capability to preserve identity. These designs make the evaluation results convincing. Theoretical Claims: The major theoretical claims (implicitly made) are that (1) the spatial + temporally corrective Jacobians are a suitable representation for this task; (2) the probability flow ODE is suitable to model the evolution of the temporal Jacobians; and (3) the flow matching loss can effectively minimize the temporal noise. These theories are outside of my major research domain, but I tried to understand them by reading through Sec. 2 and 3, and the analysis and derivations make sense to me. I would love to discuss with and learn from other reviewers to give a solid assessment of this part. Experimental Designs Or Analyses: I appreciate the well-organized experiment section. The metrics are clearly defined, and they are well designed to reflect different aspects of animation -- the extent of the motion (AE, MV), temporal coherency (TC, DS), and visual identity preservation. The new method is compared in depth with the most recent baseline, AniClipart (Sec. 4.3). The findings in this section align well with the motivations and claims in the paper's introduction. The user study further corroborates the superior perceptual quality of the new method.
The ablation study is the key evaluation -- confirming that the three major components in the proposed method are effective and necessary. In all, the experiment section is strong. Supplementary Material: I reviewed the extended qualitative results and details of the ablation study in the appendix. It's a pity that the provided website isn't available; otherwise I'd be very interested to see the real animated results. Relation To Broader Scientific Literature: The paper proposes a novel and effective method for clipart animation from single-image input. It can be of great interest to the computer graphics community as a novel alternative for automatic character animation. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: I find this paper very solid: a practical problem, a clever idea, and extensive experiments that validate all the major technical novelties. The paper is self-contained and well written. I'd recommend acceptance. Other Comments Or Suggestions: I do not have other major comments. Questions For Authors: Is the method applicable to 3D mesh deformation? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Many thanks for reviewing our paper and for your detailed and thoughtful evaluation. I truly appreciate the time and effort you put into this, as well as your unbiased rating of our work. Regarding your question about the 3D extension, FlexiClip can easily be adapted for 3D animation. This requires adding just one extra coordinate and learning the Bézier trajectory over it, which is straightforward for our approach. In fact, we plan to release the code with configurable 2D and 3D settings. To specifically address your question—yes, our method can be extended to 3D mesh deformation, and we will include the 3D extension in our code release.
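The dimension-agnostic nature of this extension can be illustrated with a short sketch. This is only an illustration of why adding a coordinate is trivial; `bezier_point` is a generic De Casteljau evaluator assumed for this example, not FlexiClip's actual implementation.

```python
import numpy as np


def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via De Casteljau's
    algorithm. Works for control points of any dimensionality (2D, 3D, ...)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]


# The same routine handles a 2D keypoint trajectory ...
p2d = bezier_point([[0, 0], [1, 2], [3, 2], [4, 0]], t=0.5)
# ... and a 3D one, obtained by simply adding one coordinate per control point.
p3d = bezier_point([[0, 0, 0], [1, 2, 1], [3, 2, 1], [4, 0, 0]], t=0.5)
```

Because the interpolation acts component-wise, nothing else in the trajectory parameterization needs to change when moving from 2D to 3D control points.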
Novelty Detection in Reinforcement Learning with World Models
Accept (spotlight poster)
Summary: This work proposes a novelty detection technique for model-based reinforcement learning with world models. It refers to sudden changes in the world model's visual properties or system dynamics as novelties. The authors use the KL divergence between latent predictions with and without observable ground truth to design the novelty detection bound. The proposed technique does not depend on additional hyper-parameters, and experiments are carried out on NovGrid, HackAtari, and the RealWorldRL Suite. Claims And Evidence: Mostly, the claims are supported by experimental results, i.e.: 1. Even though Proposition 3.1 is supported by an explanation and a visualization of the novelty detection bound in Figure 1, the quality of the paper could be improved further if the authors came up with some guarantee on the bound. 2. Referencing Srivastava et al. 2014 for Proposition 3.2 is not enough; a little more explanation is needed to make the paper self-sufficient. Methods And Evaluation Criteria: The proposed novelty detection bound has been evaluated on NovGrid, HackAtari, and the RealWorldRL Suite; these environments are suitable for the proposed method. Theoretical Claims: The claims look OK, apart from the issues listed in the Claims And Evidence section. Experimental Designs Or Analyses: The experimental designs for novelty detection of visual properties and system dynamics are OK. The authors use the false positive rate, average delay error, AUC scores, and inference run-time speedup to evaluate the proposed novelty detection technique. Supplementary Material: I have checked B (novelty detection for other world model frameworks) and D (derivations); both look OK. Relation To Broader Scientific Literature: Most prior novelty detection methods need environment-specific parameter tuning. The proposed method doesn't need environment-specific parameter tuning, which is a good advantage. Further, novelty detection is under-explored in model-based RL settings.
Hence, this work is relevant to the community. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The main strength of the paper is that the proposed technique doesn't require environment-specific parameter/threshold tuning. The main weakness, as noted above, is that the authors don't provide a rigorous proof of the novelty bound. Other Comments Or Suggestions: Typos or errors: 1. Ln 112 - col 1: signal.(Greenberg & Mannor, 2020) This 2. Ln 92 to 96 - col 1: complex sentence to get the meaning, so rephrase or at least give a reference to Eq. 3. 3. Ln 215 - col 2: (divergence of blue collapses to red)? Or do blue differences collapse to orange? Questions For Authors: Please see above, including the Claims and Evidence section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their suggestions on improving our work. In response, we outline provisional revisions below: First, we note that we have fixed each of the typos and syntax errors that were identified. Specifically, we have: * Moved the period to after the citation. * Rephrased Ln 92 to 96 - col 1 to: To develop the bound, we observe that, under nominal conditions, any divergence should be smaller than that of the predicted latent world state computed with the initial hidden state, as the latter prediction becomes increasingly inaccurate. * Rephrased Ln 215 - col 2 to: (blue differences collapse to orange). Regarding the following statement: ``` Referencing Srivastava et al. 2014 for Proposition 3.2 is not enough, a little bit more explanation is needed to make the paper self-sufficient. ``` We first reword Proposition 3.2 to align with terminology used by other works [2]: *If the cross entropy score comparison becomes **nonnegative** when introducing the vector $x_t$, then Eq. 4 defines a decision boundary in the latent space over the measure of robustness to partial destruction of the input (Vincent et al., 2008; Srivastava et al., 2014), i.e.:* $KL[p_{\phi}(z_t|h_t,x_t)||p_{\phi}(z_t|h_t)] + KL[p_{\phi}(z_t|h_t,x_t)||p_{\phi}(z_t|h_0,x_t)] \leq KL[p_{\phi}(z_t|h_t,x_t)||p_{\phi}(z_t|h_0)]$. We have also extended the derivations section in the appendix to make the derivation more explicit: Let the full posterior $p_{\phi}(z_t|x_t,h_t)$ be the most informed distribution and the desired distribution with support over the latent space $\mathrm{Z}$ at any given $t$, and let $f_\theta$ map to the representation of a distribution with the support of $\mathrm{Z}$ as well. Utilizing notation similar to that in Vincent et al.
(2008), denote $(h_t,x_t)$ as the clean input $x$ (not to be confused with $x_t$), and define three variants of $x'$: $\left\lbrace x_{(-h_t)}', x_{(-x_t)}', x_{(-h_t,-x_t)}' \right\rbrace$, which denote the removal of features $(h_t)$, $(x_t)$, and $(h_t,x_t)$ from the clean input $x$, respectively. Since the loss is at the latent level, we measure the robustness to partial destruction of a single input $x$ for a desired distribution as: $L(p_{\phi}(z_t|h_t,x_t), f_{\theta}(x'))$. Therefore, if $L$ is chosen to be the KL divergence, and $x'$ is made explicit, then we can derive Proposition 3.2 as: $L(p_{\phi}(z_t|h_t,x_t), p_{\phi}(z_t|h_0)) \geq L(p_{\phi}(z_t|h_t,x_t), p_{\phi}(z_t|h_t)) + L(p_{\phi}(z_t|h_t,x_t), p_{\phi}(z_t|h_0,x_t))$, i.e.: $$KL(p_{\phi}(z_t|h_t,x_t) \parallel p_{\phi}(z_t|h_0)) \geq KL(p_{\phi}(z_t|h_t,x_t) \parallel p_{\phi}(z_t|h_t)) + KL(p_{\phi}(z_t|h_t,x_t) \parallel p_{\phi}(z_t|h_0,x_t))$$ where $p_{\phi}$ corresponds to the distribution modeled by $f_{\theta}$ given some $x'$, and $h_0$ represents the state of $h$ when no conditioning information is available. ``` The quality of the paper can be improved further if authors come up with some guarantee on the bound. ``` Finally, as suggested, we agree that we should attempt to give the reader intuition surrounding our method by extending the appendix with the following theoretical guarantee, which shows that for a theoretically perfectly trained world model (i.e., $z_t \perp x_t \mid h_t$ and there exists a set of parameters $\phi^*$ such that $p_\phi(\cdot|\cdot) = p^*(\cdot|\cdot)$, where $p^*(\cdot|\cdot)$ denotes the well-defined true conditional distribution), Eq. 4 holds if: $E_{p_\phi(z_t \mid h_t,x_t)}\left[\log \frac{p_\phi(z_t \mid h_0,x_t)}{p_\phi(z_t \mid h_0)}\right] \ge 0.$ This can be interpreted as requiring that the Expected Information Gain (EIG) of $x_t$ be nonnegative under the distribution $p_\phi(z_t \mid h_t,x_t)$.
As an informal side note, we would ideally like to intuitively interpret $KL[p(z_t|h_t,x_t)||p(z_t)]$ as proportional to $I(z_t;h_t,x_t)$ (which is often a substitution in practice [1]), where: * $KL[p(z_t |h_t, x_t )|| p(z_t)]$ is substituted by $I(z_t;h_t,x_t) = H(z_t) - H(z_t|h_t,x_t)$ * $KL[p(z_t |h_t, x_t )|| p(z_t | h_t)]$ is substituted by $I(z_t; x_t | h_t) = H(z_t|h_t) - H(z_t|h_t,x_t)$ * $KL[p(z_t |h_t, x_t) || p(z_t | x_t)]$ is substituted by $I(z_t;h_t | x_t) = H(z_t|x_t) - H(z_t|h_t,x_t)$ Therefore, if showing Eq. 4 were somehow related to showing: $H(z_t) - H(z_t|h_t,x_t) \geq H(z_t|h_t)- H(z_t|h_t,x_t) + H(z_t|x_t)- H(z_t|h_t,x_t)$, then simplification would show: $H(z_t) - H(z_t|h_t) \geq H(z_t|x_t) - H(z_t|h_t,x_t) \implies$ $I(z_t;h_t) \geq I(z_t;h_t|x_t)$ (*), where a potential key result would lie: Eq. 4 holds if knowing $x_t$ reduces the mutual information between $z_t$ and $h_t$. ``` [1] Zhao, S., Song, J., & Ermon, S. (2019). InfoVAE: Balancing Learning and Inference in Variational Autoencoders. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5885-5892. [2] Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, Extracting and composing robust features with denoising autoencoders. ICML 2008 ```
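As a purely illustrative aside (our own toy sketch, not from the paper or rebuttal: the means and standard deviations below are invented, and we assume 1-D Gaussian latents, in the spirit of the diagonal-Gaussian posteriors and priors typically used in RSSM-style world models), the decision rule implied by Proposition 3.2 / Eq. 4 could be sketched as:

```python
from math import log

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL(N(m1, s1^2) || N(m2, s2^2)) for 1-D Gaussians."""
    return log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def is_novel(post, prior_h, prior_h0x, prior_h0):
    """Flag novelty when the Eq. 4-style bound is violated.
    Each argument is a (mean, std) pair for a 1-D Gaussian latent:
    post       ~ p(z_t | h_t, x_t)   (full posterior)
    prior_h    ~ p(z_t | h_t)        (history-conditioned prior)
    prior_h0x  ~ p(z_t | h_0, x_t)   (observation-only prior)
    prior_h0   ~ p(z_t | h_0)        (uninformed prior)"""
    lhs = kl_gauss(*post, *prior_h) + kl_gauss(*post, *prior_h0x)
    rhs = kl_gauss(*post, *prior_h0)
    return lhs > rhs

# Nominal step: the history-conditioned prior tracks the posterior closely,
# while the uninformed prior is far off, so the bound holds.
print(is_novel(post=(0.0, 1.0), prior_h=(0.1, 1.0),
               prior_h0x=(0.2, 1.1), prior_h0=(2.0, 1.0)))   # False

# Novel step: history suddenly stops helping (all priors equally off),
# so the bound is violated and novelty is flagged.
print(is_novel(post=(0.0, 1.0), prior_h=(2.0, 1.0),
               prior_h0x=(2.0, 1.0), prior_h0=(2.0, 1.0)))   # True
```

The appeal of the rule, as the rebuttal emphasizes, is that the "threshold" is supplied by the model's own less-informed predictions rather than by a user-chosen constant.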
Summary: The paper seeks to determine when there is novelty in an environment by using world models, and particularly when there is high prediction error with such world models. Such an approach aligns strongly with existing neuroscience work. The paper demonstrates its approach with strong results, exceeding the performance of most existing approaches. ## Update after rebuttal The authors successfully responded to my questions and the questions of the other reviewers. I feel confident in my score of a 4 and in the broader merit of the paper. Claims And Evidence: The claims in the submission are very strongly supported by good evidence. The authors used strong baselines and benchmarks and I did not find any issues with any claims/experiments. Methods And Evaluation Criteria: The benchmark datasets and evaluation criteria used were standard and make sense for the problem. Theoretical Claims: I checked the correctness of all proofs in the main paper. Experimental Designs Or Analyses: I checked the validity of all experiments in the main paper and did not find any issues. Supplementary Material: No Relation To Broader Scientific Literature: The paper is broadly related to world models, Reinforcement Learning and particularly Model Based reinforcement learning, as well as novelty detection/uncertainty. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ## Strengths - The approach is very simple and elegant, aligning with neuroscience ideals of uncertainty/novelty detection guided by high prediction errors - The results strongly support the claims of the authors, and the metric has several potential downstream use cases for world modeling, including determining whether the world model is properly using history (Figure 1) as well as whether the world model has converged or not. ## Weaknesses - While the approach is simple, that also means the contribution is rather limited.
I also see this as a small weakness - The authors used an older version of Dreamer (V2 instead of V3), which raises the question: does the proposed approach work for newer MBRL-based approaches? - The proposed approach is not compatible with raw inputs (i.e. when using diffusion) or non-probabilistic latents. I also see this as a small weakness Other Comments Or Suggestions: - The authors state "We introduce a novel technique for novelty detection without the need to manually specify thresholds, and without the need for additional augmented data." but then they later state "Our technique calculates a novelty threshold bound" so there seems to still be a reliance on the novelty threshold bound? I'm unsure; clarification would be helpful. Questions For Authors: - Why is the ground truth for Table 1 so noisy? - I don't understand this point "in nominal conditions, any divergence should always be smaller than the divergence of the predicted latent world state computed with the initial hidden state instead of the current initial hidden state; the latter prediction should be increasingly inaccurate." Can the authors elaborate on this? - Why did they use DreamerV2 instead of the newer DreamerV3? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review; we hope we have interpreted your questions correctly and can clarify as well as possible: * Why is the ground truth for Table 1 so noisy? * The ground truth component of the table shows the true observation given to the agent; specifically, the noisy observation is a sample from the noisy environment that occurs when we inject the novelty ‘noise high’ in the humanoid setting. * I don't understand this point "in nominal conditions, any divergence should always be smaller than the divergence of the predicted latent world state computed with the initial hidden state instead of the current initial hidden state; the latter prediction should be increasingly inaccurate." Can the authors elaborate on this? * Thanks for pointing out that this statement might be confusing to the reader; with that statement we are attempting to describe our observation in Figure 1. In response, we have rephrased the statement to: “to develop the bound, we observe that, under nominal conditions, any divergence should be smaller than that of the predicted latent world state computed with the initial hidden state, as the latter prediction becomes increasingly inaccurate.” * The authors state "We introduce a novel technique for novelty detection without the need to manually specify thresholds, and without the need for additional augmented data." but then they later state "Our technique calculates a novelty threshold bound" so there seems to still be a reliance on the novelty threshold bound? I'm unsure; clarification would be helpful. * No problem. To clarify, we are stating that the user does not have to explicitly/manually set a measurement of uncertainty $\lambda$ beforehand to trigger the detection of novelty. Our technique will simultaneously handle the bound (as well as the detection), while minimizing false positives.
* The authors used an older version of dreamer (V2 instead of V3) which begs the question--does the proposed approach work for newer MBRL based approaches? … Why did they use DreamerV2 instead of the newer DreamerV3? * We consider our work on the V2 algorithm to be mainly based on the RSSM component of the world model, and although the major change from the TensorFlow-based V2 to the JAX-based V3 introduced faster optimizations, this move would have made the experimental design more complex, despite both V2 and V3 being centered around the core RSSM framework. Therefore, for newer MBRL approaches, we believed it was more impactful to take time to test the newer Diamond (2024) [1] and IRIS (2023) [2] world models, which, unlike DreamerV3 or V2, do not use the RSSM framework. * The proposed approach is not compatible with raw inputs (i.e. when using diffusion) or non-probabilistic latents. I also see this as a small weakness. * For diffusion-based approaches, we found that the PP-MARE method (see Table 7, Section 4 PP-MARE, and Appendix B, paragraph Diamond) could successfully be used directly on raw inputs. As for non-probabilistic latents (possibly used if the environment is expected to be deterministic and fully observable), our setting of a stochastic, partially observable environment no longer holds. However, we hypothesize that a subtle extension could substitute the KL loss with a binary operator (or a suitable deterministic metric) over the posterior and prior representations when conditioned on different information. * While the approach is simple, that also means the contribution is rather limited. I also see this as a small weakness * Thanks for the feedback. We purposely crafted the main method to be simple, but to the best of our knowledge, we are the first to test novelty detection in both Atari and 3D-DMC environments entirely in the image space (see footnote 2).
We also introduce PP-MARE for those who are interested in working outside of the latent space, and are okay with specifically defining their threshold. ``` [1] Alonso, E., Jelley, A., Micheli, V., Kanervisto, A., Storkey, A., Pearce, T., and Fleuret, F. Diffusion for world modeling: Visual details matter in atari. 2024. [2] Micheli, V., Alonso, E., and Fleuret, F. Transformers are sample-efficient world models. In The Eleventh International Conference on Learning Representations, 2023. ``` --- Rebuttal Comment 1.1: Comment: [Accidentally posted as official comment] The authors successfully responded to my comments and questions and I feel confident keeping my score as a 4, especially understanding better now that there are no bounds to be manually tuned which I believe is a strong advantage.
Summary: The work proposes a principled method for detecting novelty in RL agents that use latent dynamics models, such as DreamerV2. The central idea is that when an agent encounters novel observations or dynamics, the latent state inferred from the current observation (posterior) will significantly differ from the one predicted by its internal dynamics (prior). This mismatch is captured through the KL divergence between the posterior and prior latent distributions. Rather than relying on heuristic thresholds, the authors introduce theoretically motivated bounds that relate this KL divergence to priors conditioned on reduced information — namely, with no history or only the current observation. These bounds are formalized through Propositions 3.1 and 3.2, enabling novelty detection without supervision or tuning. The paper evaluates this method across several domains, including DeepMind Control Suite, MiniGrid, and Atari, introducing both observation and transition-based novelties mid-trajectory. The results show that the proposed KL-based approach reliably detects novelty with low false positive rates and sharp response times, outperforming baselines like RIQN. A key contribution is the demonstration that a well-trained latent world model can serve as a robust internal monitor of novelty, offering a unified, efficient, and interpretable mechanism to detect both perceptual and dynamic anomalies in reinforcement learning environments. ### Update after rebuttal I am confident in my assessment to accept this paper; it's good work. Claims And Evidence: The claims made in the submission are largely supported by clear and convincing empirical evidence. The authors claim that their KL-based novelty detection method, grounded in the latent space of a Dreamer-style world model, is capable of reliably identifying both observation-level and dynamics-level novelties across a range of environments.
This is substantiated through a series of well-designed experiments on DMC, MiniGrid, and Atari, where the proposed method outperforms baselines. The per-timestep KL score clearly spikes in response to injected novelties (this can be seen even during training). Tables 2-6 show clear quantitative results where the method is able to almost instantaneously detect a novelty in most cases. That said, while the empirical results are strong, some of the theoretical claims—particularly Proposition 3.2—rest on assumptions about the behavior of KL divergence that may not always hold in practice (discussed later), such as its non-metric nature and dependence on variational approximations. This might not be a major flaw but should be talked about in the paper, in my opinion. Nonetheless, the main claims regarding the method's novelty detection performance are well-supported by the presented evidence across a variety of setups. Methods And Evaluation Criteria: Yes, the paper uses a sufficient set of baselines, datasets, and methods for the problem at hand. The baselines are good, covering a broad spectrum of novelty detection strategies. The evaluation spans three well-established benchmarks—MiniGrid, DeepMind Control Suite, and Atari—which together capture a wide range of perceptual and dynamic challenges. The proposed method is integrated naturally into DreamerV2, and its performance is compared rigorously across these environments, demonstrating the method's generality and robustness. The authors also test the effectiveness of their approach on other world model frameworks; the algorithm (the PP-MARE variant) works well on diffusion-based world models but struggles on transformer-based ones, which was also an observation of a previous work. An important issue here is: **the authors mention an Appendix H but there is none present**, which possibly discussed transformer-based world models and the KL bound for novelty detection. I recommend the authors add that for more context.
Theoretical Claims: The paper includes several theoretical claims in the form of Propositions 3.1 and 3.2, which are used to justify bounds for novelty detection based on KL divergence between posterior and prior latent state distributions. The propositions are intuitive and well-motivated, but some rely on assumptions that are not fully formalized. While Proposition 3.2 presents an intuitive inequality that resembles a triangle inequality over KL divergences, it is important to note that **KL divergence is not a true metric and does not satisfy the triangle inequality** in general. The bound in Proposition 3.2 compares the posterior to three different priors—one conditioned on memory, one on the current observation, and one on neither—and assumes that if the posterior is close to each of the simplified priors individually, then it should also be close to the full prior. However, this assumption is structurally similar to triangle inequality reasoning, which is not formally valid in KL space due to its asymmetry and lack of metric properties. As a result, the inequality may not hold even when all distributions involved are close in distributional terms. This weakens the theoretical guarantee of the proposition and suggests that violations of the bound might occur not only due to novelty, but also due to the non-metric behavior of KL divergence, particularly in high-dimensional or undertrained latent spaces. However, empirically, I don't think this is a big issue, but I still recommend the authors discuss this a bit more, especially in the derivations. Experimental Designs Or Analyses: The experimental design and analysis appear generally sound and well-aligned with the paper's goals. The authors evaluate their method across three diverse benchmarks—MiniGrid, DMC, and Atari—with injected observation and transition novelties that simulate realistic distribution shifts. The novelty is introduced mid-trajectory, allowing for precise analysis of detection timing.
The use of per-timestep KL divergence as a novelty score, along with aggregated trajectory-level evaluations (e.g., AUC, false alarm rate, detection delay), provides a decent measure of novelty detection. The comparison against multiple baselines is thorough, ensuring a fair and comprehensive evaluation. Overall, the key metrics and qualitative analyses convincingly support the claims, and no major flaws are apparent in the experimental methodology. Supplementary Material: Yes, I reviewed the derivations and the other sections. Section C is particularly interesting given its discussion on exploration. I believe the KL bound will fail to perform in the case of under-explored policies or environments with very large state spaces. The method is directly bound to how good or bad the world model is. Relation To Broader Scientific Literature: The paper builds directly on the growing body of work in model-based reinforcement learning, particularly approaches using latent world models such as DreamerV2, and extends these frameworks to address the underexplored problem of unsupervised novelty detection. While prior works like Dreamer and PlaNet focus on learning compact latent dynamics for control and planning, they do not consider the challenge of identifying distributional shifts during deployment. This paper shows that the KL divergence between the posterior and prior latent distributions—already computed internally by such models—can be leveraged as an effective and lightweight novelty signal. Importantly, unlike reconstruction- or prediction-based baselines, this method operates entirely in the latent space, making it more stable and closely aligned with the agent's internal belief state.
Relative to traditional novelty detection methods such as autoencoders, latent prediction models, or uncertainty-based approaches like RIQN, the proposed method introduces a novel theoretical framework based on KL divergence bounds derived from priors conditioned on partial information (e.g., without memory or observation). These bounds are used to derive threshold-free detection criteria rooted in variational inference, inspired by dropout-based generalization tests. A key advantage of this approach is that it requires **no additional tuning or threshold hyperparameters**, making it both more interpretable and easier to deploy in practice. Essential References Not Discussed: None to the best of my knowledge. Other Strengths And Weaknesses: I don't have many concerns barring a suggestion to include some theoretical clarifications, which I stated earlier as well; I think the paper is well written and the evaluations are exhaustive. The need not to tune per environment makes the method extremely versatile. The reproducibility is as good as it can be. However, one important weakness, I think, is that the approach's results will only be as good as the world models themselves, i.e., their applicability is limited in cases where interaction with the environment is expensive or potentially dangerous. Other Comments Or Suggestions: I think Appendix H is missing (or it is meant to be linked with Appendix B, IRIS). I think the discussion on exploration should be included in the main paper, and "Detection Score Trend Over Time" + Figure 2 can be shifted to the appendix. An under-explored policy directly impacts the reliability of the method. Additionally, as future work, the authors may want to incorporate uncertainty-aware priors to distinguish novelty from ignorance to tackle insufficient exploration (though I understand that possibly means introducing changes to the world model, which is not the point of the paper). Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time in working to improve our presentation and for their encouraging feedback. We are delighted that you took the time to help improve the quality of our work. In response, we outline provisional revisions below: * We agree that the discussion of C.2 may be more informative to the reader than the section *Detection Score Trend Over Time*, since the world model's final performance is directly tied to the exploration of the environment. Therefore, we have moved section C.2, Figure 7, and Table 8 into the main manuscript, and have moved *Detection Score Trend Over Time* and Figure 2 into the appendix for those interested in visualizing the anomaly score trend. Section C.2 now appears after the section *alternative world models*. * We agree that it is important to make a note to the reader about the downsides of using a divergence, such as the KL divergence, that is not a proper metric, and about the method's dependence on variational approximations. In response, we've extended the derivations section to discuss explicit details of the dependence on variational approximations as well as assumptions on the properties of the model itself. We state: “We briefly note that KL divergence is not a true metric. The bound in Proposition 3.2 compares the posterior to three different priors and provides a useful approximation for detection, rather than a strict mathematical guarantee. Formally, for distributions P, Q, and R, the KL divergence does not generally satisfy $D_{KL}(P \parallel R) \leq D_{KL}(P \parallel Q) + D_{KL}(Q \parallel R)$. This means Eq 4 might be violated not only due to novelty but potentially due to the inherent non-metric behavior of KL divergence, particularly in high-dimensional or complex/untrained latent spaces (see exploration effects)”.
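As a concrete illustration of this point (our own toy example, not from the paper: three unit-variance 1-D Gaussians, for which the KL divergence has the closed form $\log(\sigma_2/\sigma_1) + (\sigma_1^2 + (\mu_1-\mu_2)^2)/(2\sigma_2^2) - 1/2$), the triangle inequality for KL can indeed fail:

```python
from math import log

def kl_gauss(m1, s1, m2, s2):
    # Closed-form KL(N(m1, s1^2) || N(m2, s2^2)) for 1-D Gaussians
    return log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# Unit-variance Gaussians with means 0, 1, 2: KL reduces to half the
# squared mean difference, so "distances" grow quadratically, not linearly.
P, Q, R = (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)
lhs = kl_gauss(*P, *R)                     # KL(P||R) = 2.0
rhs = kl_gauss(*P, *Q) + kl_gauss(*Q, *R)  # KL(P||Q) + KL(Q||R) = 1.0
print(lhs > rhs)  # True: the triangle inequality is violated
```

This matches the reviewer's caution: a KL-based bound of this shape is a useful heuristic boundary rather than a metric-space guarantee.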
In addition, we also redirect the reader to this discussion as we note the reliance on functional approximators in the limitations section, which reads: *While our proposition using KL divergence has proven effective empirically, future work might explore alternative divergence measures with metric properties, such as the Jensen-Shannon divergence or Wasserstein distance, which could provide stronger theoretical guarantees. Additionally, exploring the relationship between latent space dimensionality and the reliability of variational approximations (on which our bound is based, see Appendix D) could yield insights into optimizing model architecture for novelty detection.* * We have summarized the contents of Appendix H in Appendix B to improve the self-sufficiency of the paper (which ties into C.2, exploration effects), where we discuss the exploration trick that IRIS incorporated of heavily weighting a single action to successfully earn a higher score (which could affect the performance of the world model). Specifically, we add to Appendix B: *Although our initial interpretation had moderate success, we expect that future work should be able to improve by adjusting the bound as well as addressing the exploration concerns, given the exploration trick used in IRIS, in which the "up" action was stated to be heavily oversampled in favor of improving the agent's performance rather than the world model's performance. Ultimately, we leave the task of enhancing novelty detection in the IRIS world model for future work.* --- Rebuttal Comment 1.1: Comment: Thank you for incorporating these changes. I am excited to read your new manuscript. I keep my score.
Fluctuations of the largest eigenvalues of transformed spiked Wigner matrices
Accept (poster)
Summary: This work investigates the asymptotic properties of the largest eigenvalue of entry-wise, non-linear transformations of spiked Wigner matrices. Its main result is a BBP-like theorem: above a certain critical SNR (which is explicitly given in terms of the original SNR), the top eigenvalue has asymptotic Gaussian fluctuations, while below it, it follows a Tracy-Widom law. The result is proven using an interpolation + resolvent comparison argument that requires proving a local law for this class of matrices, which is of independent interest. Numerical simulations are also given to illustrate the result. Claims And Evidence: Yes, all claims in this work are rigorous mathematical statements, and the proofs are provided. Methods And Evaluation Criteria: N/A. Theoretical Claims: I skimmed through the proof and did not spot any issue. The arguments are standard, but require controlling a rank-2 spiked Wigner matrix. Experimental Designs Or Analyses: N/A. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The universality of entry-wise non-linear functions of spiked Wigner matrices is the topic of several previous works, since it has important consequences for statistical inference. Different flavours of this result exist in the literature, the one most relevant to this work being Perry et al. (2018), which this work builds upon and thoroughly acknowledges. However, some relevant important literature is missing - see below. Essential References Not Discussed: To my best knowledge, the first rigorous universality results in this direction appeared in (Krzakala et al., 2016; Dia et al., 2016; Lelarge & Miolane 2017), who proved the universality of the asymptotic mutual information with respect to the noise channel conjectured by Lesieur et al. (2015). These are key related references that (Perry et al., 2018) builds upon. More recently, (Mergny et al.
2024) extended these universality results to cases in which the Fisher information is zero, requiring an analysis of different SNR regimes. On the related literature side, a recent result by Dandi et al., (2024) proved rigorous deterministic equivalents for non-linear spiked models coming from single gradient steps. - [Krzakala et al., 2016] Krzakala, Florent, Jiaming Xu, and Lenka Zdeborová. "Mutual information in rank-one matrix estimation." In 2016 IEEE Information Theory Workshop (ITW), pp. 71-75. IEEE, 2016. - [Dia et al., 2016] Dia, Mohamad, Nicolas Macris, Florent Krzakala, Thibault Lesieur, and Lenka Zdeborová. "Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula." Advances in Neural Information Processing Systems 29 (2016). - [Lelarge & Miolane 2017] Lelarge, Marc, and Léo Miolane. "Fundamental limits of symmetric low-rank matrix estimation." In Conference on Learning Theory, pp. 1297-1301. PMLR, 2017. - [Mergny et al. 2024] Mergny, Pierre, Justin Ko, Florent Krzakala, and Lenka Zdeborová. "Fundamental limits of non-linear low-rank matrix estimation." Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3873-3873, 2024. - [Dandi et al., 2024] Dandi, Yatin, Luca Pesce, Hugo Cui, Florent Krzakala, Yue M. Lu, and Bruno Loureiro. "A random matrix theory perspective on the spectrum of learned features and asymptotic generalization capabilities." arXiv preprint arXiv:2410.18938 (2024). Other Strengths And Weaknesses: Overall, the paper is well-written and easy to read, and the result is solid. However, I wonder about the choice of venue, since as a "pure" RMT paper it will only interest a small part of the ICML community. Maybe the authors could have put more effort into motivating how the spiked estimation problem pops up in ML, giving some more concrete examples and context. Otherwise, I think it is a good RMT paper. Other Comments Or Suggestions: - Page 1, below eq. (1), $H$ should read $W$. 
Questions For Authors: - In the first paragraph of "1.1. Contributions" and in eq. (5), should the effective SNR be $f'$ instead? Maybe I am missing something, but if this is correct, how do you reconcile with the result of (Lesieur et al., 2015) discussed in eq. (3)? - On the ML side, there has been a wave of recent interest in non-linear transformations of spiked Wigner/Wishart matrices, mostly motivated by the fact that, asymptotically, a single step of GD for a two-layer neural network trained with the square loss looks like a matrix in this ensemble. This characterization was used to prove the benefit of feature learning in several works (Ba et al. 2022; Moniri et al., 2023; Cui et al., 2024; Dandi et al., 2024). However, an important point in these works is that feature learning is only beneficial when the scaling of the spike (SNR here) is very large - i.e. scales with $N$. This regime is way above the BBP transition. Would the theorem here apply? If not, could the argument be generalized to cover this case? - What are the main technical challenges to extend the proof to the non-symmetric, spiked Wishart case? - How important is the assumption that the spike is dense $||x||_{2}^{2}=1$? The universality proofs for the mutual information of [Krzakala et al. 2016; Mergny et al. 2024] only require the support of the spike distribution to be finite. Code Of Conduct: Affirmed. Overall Recommendation: 3
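As an illustrative aside (our own sketch, not from the paper: a plain spiked GOE matrix rather than the transformed model studied here), the BBP phase transition at issue is easy to check numerically. Above the threshold $\lambda = 1$, the top eigenvalue detaches from the bulk to $\sqrt{\lambda} + 1/\sqrt{\lambda}$, matching the centering used in the rebuttal below; below the threshold it sticks to the semicircle edge at $2$:

```python
import numpy as np

def top_eig_spiked_wigner(lam, N=1000, seed=0):
    """Largest eigenvalue of sqrt(lam) * x x^T + W, where W is a GOE-like
    Wigner matrix normalized so its semicircle bulk is supported on [-2, 2]
    and x is a unit-norm spike."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(N, N))
    W = (A + A.T) / np.sqrt(2 * N)      # off-diagonal entry variance 1/N
    x = rng.normal(size=N)
    x /= np.linalg.norm(x)              # ||x||_2 = 1
    M = np.sqrt(lam) * np.outer(x, x) + W
    return np.linalg.eigvalsh(M)[-1]    # eigvalsh returns ascending order

print(top_eig_spiked_wigner(4.0))  # outlier near sqrt(4) + 1/sqrt(4) = 2.5
print(top_eig_spiked_wigner(0.5))  # subcritical: near the bulk edge at 2
```

At this finite size ($N = 1000$), the supercritical value fluctuates at scale $N^{-1/2}$ around $\sqrt{\lambda} + 1/\sqrt{\lambda}$, while the subcritical one sits within $O(N^{-2/3})$ of the edge, consistent with the Gaussian vs. Tracy-Widom dichotomy discussed above.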
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review and helpful feedback with which we can greatly improve our paper. Below, we address the concerns and questions, and also outline some important revisions we will make. **References**: Thank you for the references that we missed. We will include them in the revision. **Relevance to the ICML audience**: Thank you for the comment. In the revision, we will add and change a few paragraphs in the introduction of the current manuscript to demonstrate the relevance of our model to the ICML audience. The details can be found in our answer to Reviewer 1 (dwXH); we are sorry that we cannot write all the details in this box due to the length limit. **Other comments and questions**: (1) Thank you for pointing out the typos. We will correct them and other typos in the revision. Also, $f$ in the definition of $\lambda_e$ is a typo, which should be $f'$. (2) Thank you for your comment on the large SNR regime. If $\lambda_e = \omega(1)$, we believe that our result on the supercritical case still holds, i.e., $N^{1/2}(\mu_1 - \sqrt{\lambda_e} - 1/\sqrt{\lambda_e})$ converges to a centered Gaussian distribution with variance $2$. However, to prove it rigorously by adapting the strategy in the current manuscript, we need to find a proper target matrix for which the limiting distribution of the largest eigenvalue is proven, but unfortunately, to our best knowledge, it is not proven. Nevertheless, we would remark that the large ($\omega(1)$) SNR case should be relatively easier to analyze than the $O(1)$ SNR case, and the argument in the current manuscript should be applicable to this case without many changes. We also would remark that one can consider the case where the SNR is large but $\lambda_e = 0$ since $\mathbb{E}[f_{12}'] = 0$. See our answer to Reviewer 1 (dwXH), item **Question on generalization**, for a more detailed discussion of this case.
(3) The main technical challenge for the non-symmetric or rectangular case is that the local law is much harder to prove. More specifically, we need to use a self-consistent equation for the resolvent entries analogous to the quadratic vector equation (21), but its analysis is significantly more complicated due to some technical issues. Furthermore, if we consider a multiplicative model considered in (Perry et al., 2018; Jung et al., 2021), then the transformed matrix cannot be well approximated by a spiked random matrix. This means that the strategy in the current manuscript should be changed accordingly. (4) If we let $y_i$ be i.i.d. with an $N$-independent distribution whose mean is $0$, variance is $1$, and all moments are finite, and set $x_i = y_i/\sqrt{N}$, then Assumption 2.3 is satisfied with overwhelming probability. The prior distribution (of $y_i$) does not need to be compactly supported, so our assumption on the spike is weaker than that in (Mergny et al. 2024) in this sense. Further, the prior distribution may have an atom at $0$, so the spike does not need to be 'dense' or 'flat'. One possible generalization of our result is to allow a 'sparse' spike in the sense that $\max_i |x_i| > N^{-1/2 +\delta}$. With such a sparse spike, the fluctuation of the largest eigenvalue might exhibit another phase transition depending on $\delta > 0$. This is an interesting topic and we would like to consider it in the future. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. My questions were clarified, and I will keep my score.
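The BBP picture discussed in this thread is easy to check numerically for the plain spiked Wigner model: above the transition the largest eigenvalue detaches to $\sqrt{\lambda} + 1/\sqrt{\lambda}$ and the top eigenvector correlates with the spike, while below it the eigenvalue sticks to the bulk edge at $2$. The sketch below is ours (not from the paper); the GOE-type normalization and the flat unit-norm spike are our assumptions.

```python
import numpy as np

def top_pair(N, lam, rng):
    """Largest eigenvalue and spike overlap of a spiked GOE-type Wigner matrix."""
    A = rng.standard_normal((N, N)) / np.sqrt(N)
    W = (A + A.T) / np.sqrt(2)           # off-diagonal variance 1/N; bulk edge at 2
    x = np.ones(N) / np.sqrt(N)          # flat spike with ||x||_2 = 1 (an assumption)
    vals, vecs = np.linalg.eigh(W + np.sqrt(lam) * np.outer(x, x))
    return vals[-1], float(vecs[:, -1] @ x)

rng = np.random.default_rng(0)
mu_super, ov = top_pair(2000, 4.0, rng)   # supercritical: lam = 4 > 1
mu_sub, _ = top_pair(2000, 0.25, rng)     # subcritical: lam = 0.25 < 1
# expect mu_super near sqrt(4) + 1/sqrt(4) = 2.5, ov**2 near 1 - 1/4 = 0.75,
# and mu_sub stuck near the bulk edge 2
```

At $N = 2000$ the finite-size fluctuations ($N^{-1/2}$ supercritical, $N^{-2/3}$ subcritical) are already small enough to see both regimes clearly in a single sample.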
Summary: The authors study the largest eigenvalue of a spiked Wigner matrix model under elementwise function transformation. They show that the largest eigenvalue of the transformed matrix undergoes a phase transition when an effective SNR variable is tuned. At high SNR, they find that the largest eigenvalue is distributed as a Gaussian around a value computable from the effective SNR. At low SNR, it follows a Tracy-Widom distribution around 2. The authors establish this via proof and support their claims via numerics. Claims And Evidence: Yes, the claims are well supported Methods And Evaluation Criteria: Yes, the proposed methods make sense. Theoretical Claims: I checked the proof of the main theorems. Propositions were not checked. Experimental Designs Or Analyses: Yes, I checked the validity of both numerical experiments. The first numerical experiment is not very clear to me. The $f$ used here is defined in terms of the density $p$ and its derivative. The authors should be more concrete about what this function explicitly looks like. Supplementary Material: No. Relation To Broader Scientific Literature: The paper basically shows what the statistical fluctuations of the largest eigenvalue look like in the spiked Wigner model under elementwise transformations. In the un-transformed case, similar results are known and have applications in community detection. This is a useful generalization, as it allows us to rigorously boost the SNR in these problems. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See questions Other Comments Or Suggestions: 1.) After eq(1) there is a typo. H should be W 2.) In Eq(4) the bracket in f' is not closed 3.) In several points in the paper \lambda_e is defined with f instead of f'. This looks like a typo. If defined with f, the effective SNR will vanish by assumption. 4.) For the sake of completeness, could the authors briefly discuss what happens when \lambda_e = 1 5.) How are the assumptions 2.3 on x justified?
It seems to say that the x vector must be fairly flat. For the community detection case where x_i are proportional to {-1,1}, what do these assumptions mean? Do they translate to a statement about the size of the communities? 6.) Do the techniques used here tell us something about the eigenvector corresponding to the largest eigenvalue? For PCA-type applications this is important to know. Will this eigenvector have a large overlap with the spike? Questions For Authors: N/A (See previous) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review and helpful feedback with which we can greatly improve our paper. Below, we address the concerns and questions, and also outline some important revisions we will make. **Experimental design**: Thank you for your comment. We will include the graphs of the density $p$ and the transformation $f$ in the revision. (The precise formulas for $p$ and $f$ can be found in Appendix A.1.) See also the following links for the graphs of the [density](https://www.wolframalpha.com/input?i=Draw+%282.5%2F%5Csqrt%7B2%5Cpi%7D%29+*%5Be%5E%7B-%285%2F2%29%28x-2%2Fsqrt%7B5%7D%29%5E2%7D+%2B+e%5E%7B-%285%2F2%29%28x%2B2%2Fsqrt%7B5%7D%29%5E2%7D%5D) and the [transformation](https://www.wolframalpha.com/input?i=Draw+2.62503*%5B%28x-2%2F%5Csqrt%7B5%7D%29+e%5E%7B-%285%2F2%29%28x-2%2Fsqrt%7B5%7D%29%5E2%7D+%2B+%28x%2B2%2F%5Csqrt%7B5%7D%29+e%5E%7B-%285%2F2%29%28x%2B2%2Fsqrt%7B5%7D%29%5E2%7D%5D%2F%5B+e%5E%7B-%285%2F2%29%28x-2%2Fsqrt%7B5%7D%29%5E2%7D+%2B++e%5E%7B-%285%2F2%29%28x%2B2%2Fsqrt%7B5%7D%29%5E2%7D%5D. **Other comments and questions**: (1)-(3) Thank you for pointing out the typos. We will correct them and other typos in the revision. Also, $f$ in the definition of $\lambda_e$ is a typo, which should be $f'$. (4) If $\lambda_e = 1$, then the largest eigenvalue $\mu_1 (\widetilde{M})$ converges to $2$, but the fluctuation $N^{2/3}( \mu_1 (\widetilde{M}) - 2)$ converges to a distribution different from the GOE Tracy-Widom distribution. To our best knowledge, the limiting distribution involves the stochastic Airy operator and does not have a simple formula. We refer to Theorems 1.5 and 1.7 in (Bloemendal and Virag, 2013) for more details. (5) If we let $y_i$ be i.i.d. with an $N$-independent distribution whose mean is $0$, variance is $1$, and all moments are finite, and set $x_i = y_i/\sqrt{N}$, then Assumption 2.3 is satisfied with overwhelming probability.
(This can be proved by concentration inequalities such as a high-order Markov inequality.) Our assumption on the spike is not much different from the assumptions in other related papers, e.g., (Guionnet et al., 2023; Feldman, 2023). Further, the prior distribution may have an atom at $0$, so the spike does not need to be 'dense' or 'flat'. If we consider the community detection case, Assumption 2.3 is satisfied if $|\{ x_i = 1/\sqrt{N} \}| - |\{ x_i = -1/\sqrt{N} \}| = O(N^{1/2 + \epsilon})$, which implies that the model is (almost) balanced. (6) If we let $\boldsymbol v$ be the eigenvector corresponding to the largest eigenvalue, it is known that $\langle \boldsymbol v, \boldsymbol x \rangle^2 \to (1 - 1/\lambda_e)$ if $\lambda_e > 1$, and $\langle \boldsymbol v, \boldsymbol x \rangle^2 \to 0$ otherwise. (See Corollary 2.2 in (Guionnet et al., 2023).)
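To make the first numerical experiment's $f$ concrete, the formulas behind the links in the rebuttal above can be written out directly in code. Our reading of the linked expressions (an interpretation, not a statement from the paper): up to the constant $C$, the plotted transformation is the score $-p'(x)/p(x)$ of the two-bump noise density, the choice known to maximize the effective SNR in entrywise pre-processing (Lesieur et al., 2015). The density is left unnormalized, as plotted.

```python
import numpy as np

mu = 2 / np.sqrt(5)

def p_unnorm(x):
    # two-bump noise density from the linked plot (unnormalized)
    return np.exp(-2.5 * (x - mu) ** 2) + np.exp(-2.5 * (x + mu) ** 2)

def f(x, C=2.62503):
    # the plotted transformation; equals (C/5) * (-p'(x)/p(x))
    num = ((x - mu) * np.exp(-2.5 * (x - mu) ** 2)
           + (x + mu) * np.exp(-2.5 * (x + mu) ** 2))
    return C * num / p_unnorm(x)
```

In particular, $f$ is odd with $f(0) = 0$, matching the shape in the linked graph.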
Summary: This paper studies the fluctuations of the largest eigenvalues in transformed spiked Wigner matrices and discusses the Baik–Ben Arous–Péché (BBP)-type phase transition arising in these problems. While a great deal is known about these matrices, the contribution is the analysis of the asymptotic fluctuation of the leading eigenvalue, proving that it follows a Gaussian distribution in the supercritical regime (when the effective signal-to-noise ratio is above a critical threshold) and the Tracy–Widom GOE distribution in the subcritical regime. Their approach relies on precise concentration estimates, interpolation arguments, and Green function techniques to track the eigenvalue fluctuations. Claims And Evidence: The claims made in the paper are well-supported by rigorous mathematical proofs. I have checked --not in detail-- the theoretical derivations and found them correct. Methods And Evaluation Criteria: Since this is a theoretical work, there are no experimental design concerns. The supplementary material, which contains additional proofs, was reviewed. Theoretical Claims: The methodology follows standard theoretical techniques in random matrix theory, and the supplementary material provides complete proofs of the results. Experimental Designs Or Analyses: N/A Supplementary Material: The supplementary material, which contains additional proofs, has been briefly reviewed. It follows standard techniques. Relation To Broader Scientific Literature: The paper builds on the well-established literature on spiked random matrix models and eigenvalue phase transitions, particularly the Baik–Ben Arous–Péché (BBP) transition. It extends results from classical spiked Wigner matrix theory by considering entrywise transformations and their impact on spectral properties, a direction that has been explored in works such as Lesieur et al. (2015) and Perry et al. (2018).
The main novelty here is the rigorous characterization of eigenvalue fluctuations under such transformations, demonstrating that the largest eigenvalue follows a Gaussian or Tracy–Widom GOE distribution depending on the effective signal strength. This aligns with recent advances in random matrix theory that analyze the stability and universality of spectral methods under transformations, as seen in both theoretical works and applications in statistical inference and machine learning. While the paper is mathematically strong, its direct connection to broader ML applications is somewhat limited. Similar ideas have been used in theoretical analyses of feature learning in neural networks and optimization landscapes, suggesting a potential impact beyond pure RMT. The authors discuss some of these aspects, but making the connection more explicit could strengthen the paper's relevance to the ICML audience. Essential References Not Discussed: The literature review in this paper is well done, and I do not have much to add. The only minor suggestion, related to a question below, concerns arXiv:2403.04234, "Fundamental Limits of Non-Linear Low-Rank Matrix Estimation." Other Strengths And Weaknesses: The paper is solid and presents a rigorous mathematical analysis of eigenvalue fluctuations in transformed spiked Wigner matrices. However, it leans heavily towards mathematical random matrix theory (RMT) rather than machine learning, which raises the question of its relevance to the ICML audience. While the results are interesting and technically sound, their direct impact on practical ML applications remains somewhat unclear. The theoretical insights are valuable for understanding spectral methods, but their significance for ICML could be better motivated. Other Comments Or Suggestions: See below Questions For Authors: My main question concerns a possible generalization. In works such as Guionnet et al.
(2023) and Feldman (2023), a slightly different scaling is considered, which is further explored in detail in arXiv:2403.04234. In this generalization, the transformation starts with a signal-to-noise ratio scaling as $N^{1-1/k_f}$, but still ends up with a BBP-type transition for the transformed spectral method. I am curious whether the authors' approach could also yield Tracy–Widom-type fluctuations in this setting. If so, this would significantly enhance the impact and applicability of the results. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review and helpful feedback with which we can greatly improve our paper. Below, we address the concerns and questions, and also outline some important revisions we will make. **Relevance to the ICML audience**: Thank you for the comment. In the revision, we will incorporate the paragraphs below into the introduction of the current manuscript to demonstrate the relevance of our model to the ICML audience. Spiked random matrix models play a crucial role in analyzing many machine learning problems. For instance, in the study of feature learning in a two-layer neural network, a rank-$1$ perturbation of the initial weight matrix can approximate the updated weight after one gradient step (Ba et al. 2022). This idea is further developed by (Cui et al. 2024; Dandi et al., 2024), where the spiked Random Features model was considered with first layer weights of the form $X + r \boldsymbol{u}\boldsymbol{v}^T$. Additionally, these models have practical applications in machine learning algorithms, such as an ensemble method for Q-learning in reinforcement learning (Lee et al., 2023). The spectral properties of spiked random matrix models are important in analyzing various problems in machine learning and statistics, as spectral methods can improve convergence analysis and provide theoretical guarantees. Notable examples include community detection (Abbe, 2017) and submatrix localization (Butucea et al., 2013). In the context of machine learning, we refer to (Mondelli and Venkataramanan, 2021) for an application to the approximate message passing algorithm and (Chi et al., 2019) for an application to non-convex optimization.
Entrywise transformed random matrix models provide valuable insights into the study of fundamental limits of detection for the signal from a spiked random matrix, which has been extensively studied by analyzing the eigenvalues (Montanari et al., 2015; Johnstone and Onatski, 2015) or the mutual information and minimum mean squared error (Lesieur et al., 2015; Krzakala et al., 2016; Dia et al., 2016; Lelarge and Miolane, 2017). Further, these models contribute to the theoretical analysis of deep neural networks, where entrywise transformations correspond to pointwise nonlinear activation functions (Pennington and Worah, 2017). **References**: Thank you for the reference (Mergny et al., 2024) that we missed. We will include it in the revision. **Question on generalization**: Thank you for the good question. It is also possible to prove similar results with different SNR scaling. In our setting, the index $k_f$ can be defined as $k_f := \inf \\{ k \in \mathbb{Z}^+ : \mathbb{E}[f_{12}^{(k)}] \neq 0 \\}$. Under a mild assumption on $\boldsymbol x$ (e.g., $\sqrt{N} x_i$'s are i.i.d. with an $N$-independent distribution whose mean is $0$, variance is $1$, and all moments are finite), the critical scaling of the SNR is $\lambda \equiv \lambda(N) = \lambda_0 N^{1-1/k_f}$ and the entries of the spiked Wigner matrix are given by $M_{ij} = W_{ij} + \sqrt{\lambda_0} N^{\frac{1}{2}(1-\frac{1}{k_f})} x_i x_j$. Let us consider the case $k_f = 2$ for simplicity. The Taylor expansion in (3) is then $$N^{-\frac{1}{2}} f(\sqrt{N}M_{ij}) \approx \frac{f_{ij}}{\sqrt N} + \sqrt{\lambda_0} N^{1/4} f_{ij}' x_i x_j + \sqrt{\lambda_0} N (f_{ij}''- \mathbb{E}[f_{ij}'']) x_i^2 x_j^2 + \sqrt{\lambda_0} N \mathbb{E}[f_{ij}''] x_i^2 x_j^2.$$ This suggests that the effective SNR is $\lambda_e = \lambda_0 \mathbb{E}[f_{12}'']^2 \| {\boldsymbol x}^2 \|^4$. To prove the BBP transition, we can proceed as in the current paper with several changes.
The most significant change is that the noise matrix $V$ in (14) is now given by $$V_{ij} = \frac{f_{ij}}{\sqrt N} + \sqrt{\lambda_0} N^{1/4} f_{ij}' x_i x_j + \sqrt{\lambda_0} N (f_{ij}''- \mathbb{E}[f_{ij}'']) x_i^2 x_j^2.$$ Due to the change of the noise matrix $V$, we need to adjust several parts of the current paper. The most notable ones are as follows: (1) The (row) sum of the variances, $\sum_j \mathbb{E}[(V_{ij})^2] = 1+ \Theta(N^{-1/2})$ (which was $1+ \Theta(N^{-1})$). This means that all eigenvalues are rescaled by a (deterministic) factor $1+ \Theta(N^{-1/2})$, or equivalently, all eigenvalues are shifted by $\Theta(N^{-1/2})$. Since the shift is not negligible when compared to the typical size of the fluctuations ($\Theta(N^{-1/2})$ or $\Theta(N^{-2/3})$), our main results should be changed accordingly. (2) The proof of the local law for $V(t)$, Lemma B.3, and other proofs based on the local law and the resolvent identity, will become more involved. We need to use the bootstrap argument more times, where we first prove a (weak) estimate and then improve it by using the weak estimate itself. In the revision, we will incorporate the discussion above into the manuscript and add detailed proofs to the appendix.
OWLS: Scaling Laws for Multilingual Speech Recognition and Translation Models
Accept (poster)
Summary: This paper introduces **OWLS, open-source Whisper-style models for multilingual speech recognition (ASR) and translation tasks**, and releases the trained models. The authors empirically derive scaling laws for multilingual speech processing by training OWLS models at varying scales. Experimental results demonstrate that the large OWLS models follow the scaling laws across different languages and tasks and exhibit emergent abilities such as orthographic understanding and code-switching. ### Update after rebuttal After going through all the reviews from other reviewers as well as the author responses, I'm still concerned with the significance of the paper's findings, which are mostly expected. But I like the paper's contribution in scaling ASR models, since no prior work offers analyses as extensive as this study's. Thus, I maintain my score as weak accept. Claims And Evidence: The paper provides reasonable evidence supporting its claim regarding the difficulties in fitting universal scaling laws for language-agnostic data, which is interesting. However, I'm not quite convinced by the claim related to *ICL ability*. Whisper-style models like OWLS are trained primarily for ASR and translation, not instruction-following. These models do not inherently possess instruction-following capabilities; therefore, it is unreasonable to expect ICL behaviors from them. Also, it is pretty obvious that scaling laws appear with language-specific model size and with data scaled for diversity. Methods And Evaluation Criteria: The OWLS training corpus is a simple combination of two existing datasets (OWSM and YODAS), which is not special. However, dividing the scaling analysis clearly into model size, data size, and compute budget is an effective approach. One limitation is that the method lacks detailed justification for scaling the model specifically up to 18B parameters.
While experimenting with large-scale models is valuable, the reasoning behind architectural choices, training configurations, or design principles at this scale is insufficiently elaborated. It remains unclear if the chosen design is optimal for scaling. Theoretical Claims: Not provided. Experimental Designs Or Analyses: The experimental design is well-structured, providing comprehensive analyses. Particularly noteworthy is the thoughtful experimental setup aimed at investigating emergent abilities. Supplementary Material: I have reviewed the appended code but haven't run it. Relation To Broader Scientific Literature: The paper relates to existing literature on ASR and AST models, and more broadly, to LLM scalability. To my knowledge, there has been no previous work on comprehensive scaling-law analyses at this large model scale, which is a meaningful contribution to this literature. Essential References Not Discussed: None Other Strengths And Weaknesses: A notable strength is the comprehensive scaling-law analysis, particularly as this appears to be among the first efforts to investigate large-scale models (up to 18B) specifically for ASR and AST tasks. The experimental analyses, especially the emergent abilities, are well set up and insightful. However, a clear weakness is the minimal methodological novelty, given that the dataset merely combines existing corpora and there is not enough grounding for the scalable model design. Moreover, most findings (i.e., better performance as the model, data, or compute scales) are obvious and not significantly groundbreaking. Other Comments Or Suggestions: Every figure contains excessively small fonts, making them difficult to read. Additionally, distinguishing elements solely by color is difficult; varying shapes or increasing font sizes is strongly recommended. Questions For Authors: 1. The authors noted training using timestamp predictions: are these timestamps available for all datasets?
If not, how was missing timestamp information handled? 2. Are there any results from training encoders other than Whisper-style encoders? 3. In Sec. 5.1, the authors investigated scaling with a fixed computational budget. Have the authors also explored model performance and scaling trends when varying beam sizes on large models without budget constraints? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insights and comments. > I'm not quite convinced by the claim related to ICL ability. Whisper-style models like OWLS are trained primarily for ASR and translation, not instruction-following... While Whisper-style models are not trained for instruction-following, instruction-tuning is not a requirement for models to have the capability to perform ICL, although it will certainly improve their ICL capabilities by minimizing the distribution shift between prompts and pre-training data [1]. Early LLMs, such as GPT2 and GPT3 [2,3], were not instruction-tuned and were only trained on pure language modeling. ICL can be described as a byproduct of auto-regressive modeling, which is how ASR/ST are formulated in auto-regressive encoder-decoder frameworks such as Whisper. ICL is thus a reasonable expectation for the OWLS models because they may be able to use in-context examples (speech-text pairs) as a prior for mapping sounds to words. Examples we provide in our "Strengths And Weaknesses" response to Reviewer ogkW illustrate this capability. [4] also shows preliminary results for this, although we believe that their weaker results are due to the much smaller model size of the tested models. > One limitation is that the method lacks detailed justification for scaling the model specifically up to 18B parameters... We chose the upper limit of 18B primarily due to budget constraints. At 18B, we are still able to perform training without sharding the model parameters across GPUs, which is why we are able to maintain such high training efficiency (training time of 17 days). If we performed model sharding, the expected training time would have been ~2 months, which was infeasible. Additionally, we were still able to maintain the same mini batch size at 18B without performing gradient accumulation at our compute budget, as gradient accumulation would have also slowed down training significantly.
We want to maintain identical batch sizes to minimize experimental variance from the hyper-parameters. In terms of the model architecture, we equally allocate parameters between the encoder and decoder for two main reasons: - It is the approach adopted by current SOTA models such as Whisper - It was found to be the optimal setting in encoder-decoder scaling for NLP [5,6] Finally, we chose the Transformer architecture due to its ease of scaling. We experimented with other architectures, such as Conformer, but found it was extremely difficult to tune the hyper-parameters and get the model to converge. This is a similar finding to that of Zhang et al. when they trained Google USM [7]. Other work [8] has shown that the advantages of Conformer diminish as the model size grows. Overall, we believe that our design choices are indeed optimal when considering both effectiveness and efficiency. We will add all of this information in the final draft. > it is pretty obvious to discover the scaling laws with language-specific model size and scaling data with diversity > However, a clear weakness is the minimal methodological novelty... Moreover, most findings (i.e., better performance as the model, data, or compute scales) are obvious... We respectfully argue that many insights of the paper are not predictable. While the notion that a larger model/data/compute leads to better performance is indeed obvious, our result that the exact performance improvements w.r.t. downstream metrics like WER can be modeled as a scaling law is a novel insight. Many previous scaling-law papers only show scaling laws w.r.t. test cross-entropy loss, which is a more predictable but less actionable result for practitioners and researchers. We also want to emphasize that the degree to which performance improves at scale is also non-obvious.
For example, En to X ST BLEU increases by a very large factor of 21.8% on average when scaling from 2B (already larger than most SOTA speech models like Whisper Large (1.5B) and Seamless (1.2B)) to 18B parameters, showing that there is much to gain at scale. > Every figure contains excessively small fonts, making them difficult to read... We apologize for the inconvenience. We will make sure to rectify this in future editions. [1] An Explanation of In-context Learning as Implicit Bayesian Inference (Xie et al., ICLR 2022) [2] Language Models are Unsupervised Multitask Learners (Radford et al., 2019) [3] Language Models are Few-Shot Learners (Brown et al., 2020) [4] Can Whisper perform speech-based in-context learning? (Wang et al., ICASSP 2024) [5] Scaling Laws for Neural Machine Translation (Ghorbani et al., ICLR 2022) [6] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Raffel et al., 2019) [7] Hearing the AGI: from GMM-HMM to GPT-4o. (Yu Zhang, 2024). https://youtu.be/pRUrO0x637A. [8] Revisiting Convolution-free Transformer for Speech Recognition (Hou et al., Interspeech 2024) --- Rebuttal Comment 1.1: Comment: Thanks for the clarification regarding my concerns. The paper exhibits various ASR & AST analyses in terms of scaling laws, which is good. However, my concern still lies in the significance of the paper's findings. > Many previous scaling-law papers only show scaling laws w.r.t. test cross-entropy loss, which is a more predictable but less actionable result for practitioners and researchers. I don't see much difference between cross-entropy loss and WER, which I assume are pretty much correlated. Larger model/data/compute naturally leads to lower cross-entropy loss and lower WER, simultaneously, by a scaling law. I cannot find very surprising results drawn from this. I believe other reviewers share the same concern as mine.
Still, I like the paper since it's flawless, sound, and gives insights into scaling ASR models. I suggest the authors put more effort into the experimental details and into emphasizing the emergent abilities of OWLS. Also, please add the justification of the model design choices in the future revision.
Summary: This paper investigates the effect of model size and dataset size on multilingual Automatic Speech Recognition (ASR) and Speech Translation (ST) tasks for 150 languages. The model sizes vary from 0.25B to 18B parameters. The WER vs. size curves are fitted to power-law functions, and the correlations are reported in terms of the $R^2$ value. In addition, the paper tests the trained models on unseen tasks or languages to show the emergent capabilities of these large multilingual models. Experiments are performed on up to 360k hours of speech across 150 languages. General trends are intuitive in that larger model sizes and larger data sizes lead to lower WER and higher BLEU scores. ## Update after rebuttal The paper will benefit from some revision to clarify some of the points mentioned in the rebuttal comment. I would like to keep my initial score. Claims And Evidence: Claims that are somewhat arguable: 1) Claim: "Larger models can mitigate bias and improve the fairness of speech technologies." Argument: This is a very generic statement that may or may not be true depending on how you define fairness. In this context, the authors might have meant language coverage and fairness due to having reasonably good ASR models for many languages. However, this is a very narrow definition. There could be multiple dimensions in which the fairness of ASR models can be analyzed, and the analyses in this paper alone are not sufficient to make such a big fairness claim. 2) I could not quite follow how Section 4.3 tries to estimate the final checkpoint WER from the initial checkpoint WER, and also how this is reflected in Fig. 9. Hence, I could not check the validity of the following: "one can reasonably predict the final WER of the model given the WERs of initial checkpoints." 3) Code-switching Fig. 11 looks interesting; it seems that there are two clusters of languages: low-CER ones with a clearer pattern and high-CER languages with non-uniform patterns over model size.
In the text, some of the difference is attributed to the character set of these languages, En-PT being Latin-only versus En-ZH using different character sets. However, the results seem a little confusing. For example, why does En-AR or En-RU not suffer from a similar problem? Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: Experiments look valid. Supplementary Material: Appendix F, which could have shared more details of the embedding extraction for sample mining for the Quechua data. Relation To Broader Scientific Literature: The paper investigates large and extra-large ASR and ST models in the multilingual scenario. In particular, the differences from 1B to 9B or 18B models, which we have not empirically seen in previous studies, can lead to additional research questions about serving ASR models for low-resourced languages. Once the source code and the models are released, this OWLS suite can be used to set up baselines for other studies easily, which would be valuable. Essential References Not Discussed: NA Other Strengths And Weaknesses: - Strength: OWLS can be useful for both the ASR and ST communities with the analysis it provides. Once the code and the setup are available, it might be helpful for researchers, too. - Weaknesses: Some arguments are over-generalized and do not always hold. For example: - Fig. 4, was there an experimental issue in the AMI (or Tedlium, cannot quite tell from the color) ASR runs? - The generic fairness claim (discussed above) - In-context learning is happening to some extent, but the given example may not be sufficient to claim this ability. Other Comments Or Suggestions: 1) Fig. 3 and Fig. 5, did you mean the Tamil language when the figure says Tami? 2) What is PN-CER in the Fig. 10 caption? Is it the N-CER that was mentioned in the text? Questions For Authors: 1) Could you please explain why WER/CER is increasing with increasing data after a certain point in Fig. 5?
2) Could you please rephrase the fairness claim in Section 4.1, Multilingual ASR?
3) Fig. 4: could you please explain why the topmost curve is fluctuating? Also, in this figure it is hard to tell whether it is AMI, Tedlium, or Gigaspeech (the legend font size makes it difficult).
4) How does Section 4.3 estimate the final checkpoint WER from the initial checkpoint WER? How is this reflected in Fig. 9?
5) Code-switching Fig. 11 looks interesting: it seems that there are two clusters of languages, low-CER ones with a clearer pattern and high-CER languages with non-uniform patterns over model size. In the text, some of the difference is attributed to the character set of these languages, En-PT being Latin-only versus En-ZH using a different character set. However, the results seem a little confusing. For example, why do En-AR and En-RU not suffer from a similar problem?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insights and comments. We organize our response by section:

## Supplementary Material

We believe that the appendix contains all the necessary information on the embedding extraction process, although we acknowledge that the description may be unclear and hard to follow through text alone. For further clarity, we will add the following figures to the manuscript to better visualize the embedding extraction method: https://anonymous.4open.science/r/owls-samples-13A3/icl_figure.md . The code to perform ICL will also be released for reproducibility.

## Claims And Evidence:

1. We understand the reviewer’s concern. While our definition was indeed focusing on the fact that scaling leads to a better model for many languages, we do acknowledge that this definition may be too narrow. We will instead rephrase the fairness claim to “Scaling to larger models can improve ASR performance across the board, in both low and high resource languages, with significant improvements in the former category.”

2. We perform inference with the model after 15K, 30K, 60K, 120K, 240K, and 480K training steps, and use these datapoints to fit a power-law function. We then extrapolate the power law to 675K steps, which is where the final checkpoint is taken from, and compare the prediction to the true WER of the final checkpoint. In Fig. 9, each datapoint corresponds to a checkpoint that we evaluated (note that some early checkpoints are not visible since we cut off the chart at 100 WER). The training FLOPs of each checkpoint are then calculated by profiling each model size for a few steps and scaling the result to 15K-675K steps. We will adjust the manuscript to make these details more clear.

3. We also arrived at the conclusion that the clusters are largely distinguished by character set. However, we believe that linguistic similarity likely also plays a role.
Russian, despite using Cyrillic letters, is more linguistically similar to English and the bottom-cluster languages (which are all European) than the top-cluster languages. We believe there may be some miscommunication for Arabic due to our presentation format. Arabic is the red line in the middle of the top cluster and certainly suffers from higher CERs. We will update the chart with different shapes for each language to make it less confusing.

## Other Strengths And Weaknesses

> Fig. 4, was there an experimental issue in the AMI (or Tedlium, cannot quite tell from the color) ASR runs?

We address this below in our response to Question 3.

> In-context learning is happening to some extent but the given example may not be sufficient to claim this ability.

We have included a few example ICL generations at https://anonymous.4open.science/r/owls-samples-C53F/README.md , which should better illustrate the improvements in ASR output as the number of in-context examples increases. The most obvious change is from pure zero-shot (k=0) to one-shot ICL (k=1), where the model changes from outputting a completely wrong language / character set to text that looks like Quechua. With more examples, the models also start to learn more complex phone-to-orthographic mappings ("ai" -> "ay") and white-space placement ("ruraikunatukunapa" -> "ruraikuna tukunapa").

## Other Comments Or Suggestions

1. Yes for both. Apologies for the mistake. We will fix this typo.
2. Yes, the caption should also read PN-CER. We will fix this typo.

## Questions For Authors

1. We believe that the large increases in WER/CER are due to two main factors: 1) domain mismatches between the additional YouTube data and the FLEURS evaluation data, and 2) interference from similar languages/dialects (i.e., Chinese and Cantonese).
2. Addressed above in "Claims and Evidence" point 1.
3. The fluctuation is in AMI, where the data is largely spontaneous and conversational, as the audio is sourced from business meetings.
Since our training data is largely read speech and multilingual, it can be expected that there will be some small fluctuations on individual test sets despite average performance improving with scale. We will make the figure easier to read by using different shapes for the points of each dataset. 4. Addressed above in "Claims and Evidence" point 2. 5. Addressed above in "Claims and Evidence" point 3.
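The power-law extrapolation described in point 2 of the Claims and Evidence response above can be illustrated with a short sketch: fit WER ≈ a·steps^b by linear least squares in log-log space, then evaluate the fit at 675K steps. The WER values below are made up for illustration, not the paper's measurements:

```python
import math

def fit_power_law(steps, wers):
    """Fit WER ~ a * steps**b via linear least squares on log-log data."""
    xs = [math.log(s) for s in steps]
    ys = [math.log(w) for w in wers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

# Intermediate checkpoints (steps) and hypothetical WERs at each one.
steps = [15_000, 30_000, 60_000, 120_000, 240_000, 480_000]
wers = [40.0, 30.3, 23.0, 17.4, 13.2, 10.0]

a, b = fit_power_law(steps, wers)
predicted_final = a * 675_000 ** b  # extrapolate to the final checkpoint
print(f"WER(steps) ~= {a:.0f} * steps^({b:.3f}); predicted WER @ 675K = {predicted_final:.1f}")
```

The prediction is then compared against the WER actually measured at the 675K-step checkpoint, as in Fig. 9.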
Summary: The paper introduces OWLS, a suite of multilingual speech recognition and translation models ranging from 0.25B to 18B parameters. It systematically studies scaling laws for speech tasks, demonstrating how model, data, and compute scaling impact performance. The paper claims that larger models improve low-resource language performance and exhibit emergent abilities, and it presents empirical scaling laws derived from experiments. ## update after rebuttal The authors’ response has addressed several of my initial concerns; however, it does not substantially change my overall evaluation of the work. The paper primarily focuses on scaling a standard ASR model rather than introducing novel methodological contributions or architectural innovations. While I acknowledge the significance of the presented experiments to the ASR community, the results neither revealed surprising findings nor provided sufficient new insights. I was particularly hoping for a more in-depth and reflective discussion on how the Audio-LLM paradigm contrasts with traditional ASR approaches—an aspect that remains unaddressed in the current version. After reviewing comments from other reviewers, I noted that similar concerns were shared. As such, I will maintain my original evaluation. Claims And Evidence: The claims about scaling benefits (especially for low-resource languages) are well-supported by experimental results. The study effectively demonstrates the impact of scaling on WER and BLEU scores across different settings. Weakness: While the paper suggests that emergent abilities arise with larger models, some results (e.g., code-switching improvements) appear inconsistent across languages. More justification is needed. Methods And Evaluation Criteria: The methodology is well-structured, following systematic scaling experiments. The use of publicly available data enhances reproducibility, but some details about data preprocessing are missing. 
Theoretical Claims: The paper effectively extends scaling laws to speech tasks, building on prior work in text and vision. The derivation of scaling laws is reasonable, but it would benefit from a more formal theoretical justification. Some empirical observations (e.g., emergent abilities) could be further contextualized within existing theories. Experimental Designs Or Analyses: The experiments are well-controlled, and the choice of different model sizes is appropriate. The scaling trends in WER and BLEU are insightful, but the role of domain mismatch in dataset expansion (e.g., adding YODAS data) should be further explored. The analysis of emergent abilities is interesting but would be stronger with qualitative examples. Supplementary Material: The appendix provides useful details, especially on training settings and dataset composition. More details on hyperparameter selection and convergence criteria would be helpful. The code is provided in the Supplementary Material. Relation To Broader Scientific Literature: The paper situates itself well within prior scaling laws research in NLP and speech. Essential References Not Discussed: The paper does not discuss some recent multilingual speech models (e.g., SeamlessM4T and MMS from Meta). Other Strengths And Weaknesses: Strengths: * The open-source approach enhances reproducibility. * The paper provides a comprehensive scaling analysis for speech models. * Strong empirical results demonstrate the model’s efficiency. Weaknesses: My primary concern is that the performance of this work appears relatively uncompetitive compared to recent advancements in Audio-LLMs, such as Qwen2-Audio. While this paper primarily focuses on Automatic Speech Recognition (ASR) and Speech Translation (ST), Audio-LLMs support a broader range of tasks and offer wider application scenarios. I acknowledge that a notable strength of this study lies in its efficiency, achieved through hybrid CTC/Attention training. 
However, it remains uncertain whether this approach represents the optimal solution by current standards. Additionally, the paper does not provide a direct comparison with other efficiency-focused models, such as SenseVoice, which raises concerns about the completeness of the evaluation. Therefore, despite the extensive experiments conducted, the paper lacks a compelling demonstration of its results, which limits its overall impact.

Other Comments Or Suggestions: Correct the legend in Figure 1: the blue color in the legend is incorrectly assigned to Whisper, whereas the caption states that it represents OWLS. Ensure that the legend accurately reflects the intended model categorization to avoid confusion.

Questions For Authors:
1. How does OWLS compare to existing large-scale models like SeamlessM4T and SenseVoice in terms of efficiency?
2. Have you tested the generalization of OWLS on out-of-domain data beyond the current benchmarks?
3. How does the training cost of OWLS compare to other state-of-the-art multilingual speech models?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the review and insights.

> some details about data preprocessing are missing

Reviewer uTgG raised similar concerns. Please refer to our response to their question 3.

> code-switching improvements appear inconsistent across languages.

Since we are performing multilingual multi-domain training and evaluation, some per-language fluctuation is expected. However, the average code-switching CER clearly decreases with increases in model size. We will better illustrate this by including a line for the average performance, along with a table with raw scores in the appendix.

> the role of domain mismatch in dataset expansion should be further explored

While we agree that the domain mismatch should be further explored (and plan to do so in future work), we believe that further experiments are outside this paper's scope, which is focused on multilingual neural scaling rather than studying the role of data domains. The domain shift finding is a consequence of doing such large scaling analyses, rather than a hypothesis we explicitly designed an experiment to test.

> The analysis of emergent abilities would be stronger with qualitative examples.

Table 9 includes example mondegreen generations. We have uploaded the accompanying audio for reference in https://anonymous.4open.science/w/owls-samples-88B2/ . We also added sample generations for ICL (Table 5): https://anonymous.4open.science/r/owls-samples-C53F/README.md . We will include these in the final manuscript.

> More details on hyperparameter selection and convergence criteria

We use the same hyper-parameters as OWSM v3.1, which generalized to all model sizes. We did not have explicit convergence criteria, but instead cut off training after 675K steps for fairness and budgeting. All models appear to have converged by this point in both validation and training loss. We will include convergence plots in the final draft.
> does not discuss some recent multilingual speech models (SeamlessM4T and MMS)

We reference Seamless in 2.3 and discuss how they differ from Whisper-style models. We also reference MMS at a high level in 2.3, but will update the draft to make more direct comparisons. WER comparisons with Seamless are shown in the table below.

> performance appears relatively uncompetitive compared to Audio-LLMs, such as Qwen2Audio...does not provide a direct comparison with other efficiency-focused models, such as SenseVoice

Below are comparisons with these models for 3 languages using FLEURS and non-FLEURS test sets. Chinese/Japanese use CER, English uses WER. We use the officially reported scores when possible, designated by a *, otherwise we run inference ourselves. Since SenseVoice-L is not publicly available, we only include their official results.

| Model | FLEURS zh | FLEURS en | FLEURS ja | AISHELL 1 zh | test-clean en | Reazon ja |
|-|-|-|-|-|-|-|
| OWLS 1B | 13.8 | 9.7 | 9.2 | 6.2 | 2.3 | 7.8 |
| OWLS 9B | 11.6 | 8.5 | 7.7 | 4.8 | 1.9 | 7.3 |
| OWLS 18B | 10.6 | 7.7 | 7.2 | 4.8 | 2.0 | 7.5 |
| OWLS 18B v2 | 10.1 | 6.8 | 6.7 | 4.8 | 2.0 | 7.2 |
| Qwen2Audio | 7.5* | 9.4 | 20.1 | 8.7 | 1.6* | 50.0 |
| SenseVoice S | 9.6 | 10.3 | 13.1 | 3.0* | 3.2* | 37.1 |
| SenseVoice L | | | | 2.1* | 2.6* | |
| Seamless Medium | 15.7* | 8.3* | 15.9* | 9.6 | 4.2 | 34.9 |
| Seamless Large | 17.0* | 7.3* | 17.6* | 8.7 | 3.7 | 36.6 |

OWLS models are competitive with Qwen2Audio: OWLS 18B v2 outperforms Qwen2Audio on 4/6 test sets. We note that we were also unable to reproduce the results in the Qwen2Audio paper, since they did not release their prompts (we obtained a CER of 14.2 on FLEURS zh with Qwen2Audio, while the official score is 7.5 CER), further emphasizing the importance of the open nature of the OWLS suite. We found that Qwen2Audio was more susceptible to hallucinations compared to the other models, which is in line with previous studies on LLMs for ASR [1].

## Questions

1.
OWLS is similar to Seamless and SenseVoice-L in terms of algorithmic efficiency, since they're all encoder-decoders.

2. All benchmarks we use are in the manuscript. Code-switching and Quechua are out-of-domain.

3. While it is difficult to compare the training cost to all SOTA models due to the proprietary nature of training details, the table below shows a comparison for the models where the information is available:

| Model | Parameters | GPUs | Training Hours |
|-|-|-|-|
| Canary | 1B | 128xA100 80GB | 48 |
| OWSM v3 | 900M | 64xA100 40GB | 240 |
| OWSM v3.1 | 1B | 64xA100 40GB | 384 |
| OWSM-CTC | 1B | 64xA100 40GB | 300 |
| OWLS | 1B | 24xH100 80GB | 116 |
| OWLS | 18B | 48xH100 80GB | 405 |

Note that training hours are not fully comparable due to differences in GPU and cluster environments. Canary also uses a pre-trained speech encoder to decrease the training time, which is not considered in the final budget. Nevertheless, we believe the OWLS training is well-optimized, as we had a limited compute budget. All of these optimizations will be open-sourced.

[1] Performance evaluation of SLAM-ASR: The Good, the Bad, the Ugly, and the Way Forward. (Kumar et al., ICASSP 2025 SALMA Workshop).
Summary: This paper investigates scaling laws for multilingual, multi-task speech-to-text models. To achieve this, the authors introduce OWLS, a collection of ASR/ST models ranging from 0.25B to 18B parameters, with the 18B model being the largest publicly known ASR/ST model to date. The study examines three dimensions of scaling: model size, data size, and compute budget, to understand how each factor influences performance. Additionally, the paper highlights emergent abilities that only appear once the model reaches a certain scale. The OWLS project is fully open-sourced, with the training code, logs, and checkpoints set to be released to facilitate reproducibility and further research.

Claims And Evidence: This paper makes several key claims:

1. Scaling Laws: The most significant claim is that the performance of multilingual ASR/ST models follows a (language-specific) scaling law. This is well supported by Figures 3, 6, and 7, which demonstrate a consistent relationship between model size, data size, and performance metrics.

2. Benefits for Low-Resource Languages: The paper also claims that larger multilingual ASR models improve performance for low-resource languages by reducing WER. This is generally well supported by the results. But I am not sure how the paper demonstrates that it "mitigates bias and improves the fairness of speech technologies." Without further analysis, this claim seems overstated.

3. Emergent Abilities: The section on emergent abilities could be clearer, particularly the discussion of mondegreen phenomena.

4. In-Context Learning (ICL): The ICL results appear inconsistent, lacking a clear trend across different model sizes and settings.

Methods And Evaluation Criteria: Yes, WER/CER and BLEU are both standard criteria for measuring ASR and ST performance.

Theoretical Claims: Not applicable, as this is an empirical paper.

Experimental Designs Or Analyses: Yes.
For the Mondegreen section, I am not particularly sure how PPL/MOS relate to ASR performance, or in other words, how they are useful. Does this phenomenon actually help reduce WER/CER, or is it simply an artifact of large-scale training? If not, why is this ability important enough to be discussed in this paper?

Supplementary Material: Yes. A, B, C, D, and F.

Relation To Broader Scientific Literature: This paper shows that multilingual ASR/ST models improve predictably with scale, especially for low-resource languages. It also explores emergent abilities like mondegreens and code-switching, though these findings need stronger proof. Additionally, it tests In-Context Learning (ICL) in ASR, but the results are unclear. Overall, the paper applies scaling laws to speech models and raises interesting questions about model behavior at large scales.

Essential References Not Discussed: Not that I am aware of.

Other Strengths And Weaknesses: This paper is well-written and easy to read. One minor comment is that the colors used in the figure are very friendly to color-blind people.

Other Comments Or Suggestions: No.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insights and comments. We organize our response by section:

## Claims And Evidence:

2. We understand the reviewer’s concern. While our definition was indeed focusing on the fact that scaling leads to a better model for many languages, we do acknowledge that this definition may be too narrow. We will instead rephrase the fairness claim to “Scaling to larger models can improve ASR performance across the board, in both low and high resource languages, with significant improvements in the former category.”

3. We have included the accompanying audio for the mondegreen samples in Appendix Table 9 here: https://anonymous.4open.science/w/owls-samples-88B2/ , which we hope better illustrates the phenomenon. If there are any particular items that can be further clarified, we will be happy to make those changes as well.

4. We have included a few example ICL generations at https://anonymous.4open.science/r/owls-samples-C53F/README.md , which should better illustrate the improvements in ASR output as the number of in-context examples increases. While there is not a clear monotonic trend, we note that there are 3 clear clusters in ICL CER that are aligned with model size: 0.25B and 0.5B both have ~33.8 CER, 1B to 4B all have ~31.5 CER, and 9B to 18B have ~27.7 CER.

## Experimental Designs Or Analyses:

> For the Mondegreen section, I am not particularly sure how PPL/MOS relates to ASR performance, or in other words, how they are useful. Does this phenomenon actually help reduce WER/CER, or is it simply an artifact of large-scale training? If not, why is this ability important enough to be discussed in this paper?

PPL/MOS do not have a relationship with ASR performance, since the only goal of these metrics is to measure the English coherence of the mondegreen generation. We note that this phenomenon likely does not have any impact on overall WER/CER, and is probably indeed an artifact of scaling.
However, we believe that it is important to discuss and present this evaluation, because it allows researchers to better understand the properties of neural networks that emerge at scale, and how they relate to human perception of spoken language. By including this evaluation, we can show that there is much we do not understand about ASR models and how they scale, opening up potential new areas of research that were previously not possible due to lack of access to such models.
Summary: This paper empirically evaluates the scaling law for speech recognition and translation in terms of training data, model size, and compute cost, using a total of 350K hours of multilingual training data.

Claims And Evidence: Yes

Methods And Evaluation Criteria: N/A

Theoretical Claims: This paper is primarily experiment-driven; it does not present strong theoretical claims.

Experimental Designs Or Analyses: Yes

Supplementary Material: Yes, I read all the supplementary material.

Relation To Broader Scientific Literature: This paper investigates the scaling law for speech, which aligns with the scaling law observed in text data for large language models.

Essential References Not Discussed: The references look good to me.

Other Strengths And Weaknesses:

Strengths: This paper presents a valuable study, providing a comprehensive analysis of the effects of data scale, model size, and computational cost on speech recognition and translation. The extensive experimental evaluation is highly appreciated.

Weaknesses: The paper does not introduce novel ideas or algorithms but instead presents a series of experiments. Moreover, most of the conclusions are predictable, limiting the overall insight gained from the study. Additionally, I have concerns about the analysis, as the transcriptions are sourced from YouTube, and their accuracy remains uncertain, which might affect the conclusions drawn in the paper.

Given the paper’s clear strengths and potential weaknesses, I would rate it as a weak acceptance.

Other Comments Or Suggestions: See below

Questions For Authors:
1. The conclusions drawn from the study may not be entirely reliable due to the noisy nature of the training data. Scaling trends observed here might not hold when using large-scale, high-quality transcribed audio data.
This could also explain the discrepancy in Figure 2, where the authors inspect the relationship between WER and the amount of training data, which does not fully align with expected scaling laws.

2. The experimental results largely align with expectations and do not provide particularly surprising insights. E.g., the statement, “while parameter scaling can significantly improve ST performance, it cannot overcome cases where there is inherently insufficient data to learn the task”, is intuitive and aligns with prior expectations.

3. The paper does not specify the quality of transcriptions used for speech recognition and translation. The authors should describe how the transcriptions were obtained for both the YODAS and OWSM datasets, as transcription quality significantly impacts scaling trends.

4. Model Size in Table 1: In Table 1, listing the model sizes of Canary and Whisper would provide better clarity. I assume the authors are using Whisper Large v3; please confirm.

5. The relevance of Table 2 is unclear. Given that WER is generally not highly sensitive to beam size, I would expect minimal variations. If this assumption is incorrect, the authors should provide supporting WER values to demonstrate the impact of beam size.

6. Since this is a multilingual ASR model, observations based on a single language may not generalize across different languages. The authors should clarify whether the observed trends are consistent across multiple languages.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable insights.

> most of the conclusions are predictable, limiting the overall insight gained from the study

We respectfully clarify that many of our insights are not predictable. While the notion that a larger model leads to better performance is indeed obvious, showing that exact performance improvements w.r.t. downstream metrics like WER can be modeled as a scaling law is a novel insight. Many previous scaling law papers only show scaling laws w.r.t. test cross-entropy loss, which is a more predictable but less actionable result for researchers.

> concerns about the analysis, as the transcriptions are sourced from YouTube, and their accuracy remains uncertain... Scaling trends observed here might not hold when using large-scale, high-quality transcribed audio data.

All of our evaluation results are obtained on standard benchmarks with high-quality transcriptions, such as FLEURS or CoVoST, commonly used in evaluating many large-scale models such as [1,2]. We only use YouTube transcriptions to help train two models in this paper (OWLS 1B and 18B 360K), which means the scaling curves in Figures 1-4, 6, 7, and 9 are not affected.

## Questions

1. Most of our training data is high quality [4], as it comes from common academic benchmarks. We only use noisier transcriptions for OWLS 1B 360K and OWLS 18B 360K. The purpose of Figure 2 is to show scaling patterns across all languages, and that it is infeasible to fit a language-agnostic universal scaling law. This is a reasonable expectation, since some languages are naturally more difficult to model for ASR than others (i.e., Spanish vs. Chinese) [5], so they will require more training data. We acknowledge that this point was not clearly made in the paper, and will word it better in future drafts.

2.
While we believe that the statement is indeed true for traditional ASR/ST models, we respectfully argue that such intuitions may not hold true for large-scale models like OWLS 18B. For example, [6] found that LLMs only require a small amount of leaked bilingual data to learn text translation. It is thus not completely out of the question that a sufficiently large ASR/ST model may be able to overcome traditional data limitations. Due to the potential openness of this question, we believe that this result is valuable evidence and proof to improve the scientific community’s understanding of large-scale models.

3. A breakdown of all datasets is found in Appendix A/Table 6. The OWSM datasets are high quality [4], as they come from common academic benchmarks, such as LibriSpeech, VoxPopuli, MuST-C, and CoVoST. For YODAS, we obtain a cleaner subset from the dataset authors. They use text language identification on the transcripts (to check that the text/audio language is aligned) and CTC segmentation on the audio with a pre-trained OWSM-CTC [7] model. The worst 10% of the utterances by CTC segmentation score are then discarded. We will make this information more apparent in the main paper body in future drafts.

4. We apologize for the oversight. Canary is 1B and Whisper is 1.5B. We are using Whisper Large V3, and will make this more clear in future drafts.

5. We ran each model in Table 2 on LibriSpeech test-other with beam size=1, and compare the results with those of Table 2 below, showing that WER is indeed sensitive to beam size. We will include this analysis in future drafts.

| Model Size | Beam Size | WER |
|-|-|-|
| 0.25B | 1 | 9.6 |
| 0.25B | 10 | 8.3 |
| 2B | 1 | 5.2 |
| 2B | 4 | 4.7 |
| 4B | 1 | 4.6 |
| 4B | 2 | 4.5 |
| 9B | 1 | 4.5 |
| 9B | 2 | 4.5 |

6.
We can assert that the observed trends are consistent across multiple languages, as we perform multilingual evaluations in almost every figure/table. The only exception is beam search (Table 2), which we will rectify by including the below results on FLEURS for Spanish and Turkish. Larger models with smaller beam sizes have better WER than smaller models with larger beam sizes. The only exception is 4B and 9B, where 4B already outperforms 9B with beam size=1.

| Model Size | Beam Size | Spanish | Turkish |
|-|-|-|-|
| 0.25B | 1 | 27.5 | 82.0 |
| 0.25B | 10 | 22.9 | 69.5 |
| 2B | 1 | 10.6 | 42.3 |
| 2B | 4 | 9.4 | 34.5 |
| 4B | 1 | 9.0 | 34.5 |
| 4B | 2 | 8.4 | 30.6 |
| 9B | 1 | 9.7 | 29.2 |

[1] Robust Speech Recognition via Large-Scale Weak Supervision (Radford et al., ICML 2023).
[2] Gemini: A Family of Highly Capable Multimodal Models (Gemini Team, 2024).
[4] On the effects of heterogeneous data sources on speech-to-text foundation models (Tian et al., Interspeech 2024).
[5] Language Complexity and Speech Recognition Accuracy: Orthographic Complexity Hurts, Phonological Complexity Doesn't (Taguchi et al., 2024).
[6] Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM’s Translation Capability (Briakou et al., ACL 2023).
[7] OWSM-CTC: An Open Encoder-Only Speech Foundation Model for Speech Recognition, Translation, and Language Identification (Peng et al., ACL 2024).
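To make the beam-size sensitivity discussed in point 5 concrete, here is a minimal, self-contained beam-search sketch over a hand-picked toy distribution (hypothetical probabilities, not an ASR decoder): a beam of 2 recovers a higher-probability sequence that greedy decoding (beam size 1) misses.

```python
import math

def beam_search(next_dist, steps, beam_size):
    """Toy beam search: next_dist(prefix) -> {token: P(token | prefix)}."""
    beams = [((), 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(steps):
        candidates = [
            (seq + (tok,), score + math.log(p))
            for seq, score in beams
            for tok, p in next_dist(seq).items()
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0][0]

def next_dist(prefix):
    # Hand-picked toy probabilities: the greedy first token "A" (P=0.6)
    # leads only to weak continuations, while "B" (P=0.4) leads to a
    # strong one, so P(B,C)=0.36 beats the best greedy path P(A,A)=0.21.
    if not prefix:
        return {"A": 0.6, "B": 0.4}
    if prefix[-1] == "A":
        return {"A": 0.35, "B": 0.35, "C": 0.30}
    return {"C": 0.90, "A": 0.05, "B": 0.05}

greedy = beam_search(next_dist, steps=2, beam_size=1)
beam2 = beam_search(next_dist, steps=2, beam_size=2)
print(greedy, beam2)  # greedy settles for ('A', 'A'); beam=2 finds ('B', 'C')
```

In a real decoder the gap between beam sizes depends on how peaked the model's distributions are, which is one plausible reason the sensitivity differs across model sizes and languages in the tables above.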
Demystifying Cost-Efficiency in LLM Serving over Heterogeneous GPUs
Accept (poster)
Summary: The paper investigated a highly practical problem of co-optimizing GPU composition, deployment configurations, and workload assignment in a heterogeneous GPU environment under a budget constraint to maximize Large Language Model (LLM) serving efficiency. The authors proposed a Mixed Integer Linear Programming (MILP) formulation to optimize the makespan for processing all workloads. Experimental results showed that the proposed method outperforms both homogeneous and heterogeneous baselines across a wide range of scenarios.

Claims And Evidence:
- The paper claims that LLM serving efficiency can be significantly influenced by heterogeneous GPU composition, deployment configuration, and workload assignment policy. This claim is well supported by the evidence in Figures 3 and 4, as well as the example provided in Section 4.2.
- The paper claims that the proposed method significantly outperforms both homogeneous and heterogeneous baselines across a wide range of scenarios. This claim is generally supported by the experimental results.

Methods And Evaluation Criteria: The method proposed in this paper has several significant problems.
- In the formulated MILP problem, a deployment configuration $c$ is defined by a tuple $(v_c, s_c, o_c, h_{c,w})$, with $h_{c,w}$ defined as the "throughput of config $c$ on workload $w$", and $v_c$ indicating the GPU composition. However, I find this questionable, as the throughput should be affected by the GPU composition. This would mean $h_{c,w}$ is recursively defined. Here, it seems that the authors are treating the throughput as invariant to the GPU composition, which is not realistic.
- Also, solving the MILP problem requires $h_{c,w}$ to be known in advance. This is a significant limitation, as measuring this throughput requires non-trivial effort and resources, especially considering the vast number of possible configurations and workloads.
- The MILP formulation aims to optimize the makespan for processing all workloads. However, the makespan is affected by the distribution of workloads, a factor entirely absent from the MILP formulation.

Theoretical Claims: The paper did not make any theoretical claims or proofs.

Experimental Designs Or Analyses: The experimental design is generally well-structured, and the results are thoroughly analyzed. The authors compare their system's end-to-end performance against the primary baseline, HexGen, and conduct ablation studies to assess the effects of GPU composition, deployment configurations, and request assignment policies. In addition, they evaluate the effectiveness of their proposed heuristic for the MILP problem. Methods for extending to multi-model cases are also discussed. However, HexGen is currently the only baseline used for comparison. Introducing more baselines would provide stronger support for the paper's performance claim.

Supplementary Material: I have read through all the supplementary materials in the appendices, from Appendix A to Appendix K.

Relation To Broader Scientific Literature: The paper is mainly related to heterogeneous Large Language Model (LLM) serving, specifically exploring how a heterogeneous GPU configuration can reduce the cost budget while improving LLM inference efficiency.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
- The paper's overall presentation is generally clear and well-organized.
- This paper focuses on a highly practical problem, exploring the challenges of optimizing LLM serving efficiency in a heterogeneous GPU environment under budget constraints.
- Despite the questionable workload distribution setup, the experimental results are generally well-presented and thoroughly analyzed.
Other Comments Or Suggestions: I have some very specific comments and suggestions for the authors to consider, listed below: - The y-axis label "Throughput" for the left subplots in Figure 3 may mislead readers, as it doesn't reflect cost efficiency. I would suggest "Throughput Per Dollar" instead. - Also in Figure 3, the x-axis label "P5-P100 Latencies" for the right subplots needs a better explanation. Exactly what percentiles are being referred to here? - Figure 4 is difficult to interpret. - I recommend explicitly stating what the bars in each subplot represent. The current x-axis labels seem incorrect and should likely be "Different Deployment Configurations" instead. - Additionally, Observation 2 from the paper is not apparent in this figure. - In the "Workloads and Assignment" paragraph of Section 4.3, the symbols $m$ and $M$ appear to be errors and should be corrected to $w$ and $W$, respectively. - The reference format requires substantial revision. Several cited works, published in conference proceedings, are inaccurately listed as arXiv preprints, which is misleading and needs correction. Questions For Authors: See Comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. How is the throughput $h$ profiled or defined?

We agree with the reviewer that, because $h$ is recursively defined, it is impractical to profile every possible configuration exhaustively. However, in practice, this can be solved by employing a one-time profiling strategy that captures the following components. (This approach follows the profiling method used in Vidur, MLSys'24.)
- Inference prefilling latency: We profile the latency for a single transformer layer across varying tensor parallelism (TP) degrees, workload types, and GPU types.
- Inference decoding latency: We profile the decoding latency for a single transformer layer under similar variations in TP degrees, workload types, and GPU types.
- Pipeline communication latency: We measure the communication latency between different GPUs across various workload types.

Using these measurements, the per-request latency for any configuration is estimated by combining the TP costs (both communication and computation) of all layers—which may be served by different GPUs and at varying TP degrees—with the pipeline parallelism (PP) communication cost. (Note that our heuristics, as discussed in Section 4.3 and Appendix D, largely reduce the profiling space, e.g., TP is employed only intra-machine.) Additionally, when estimating throughput, the prefill and decoding phases are treated separately: **(1)** The prefill phase is compute-bound, and its batched processing capacity is determined by the sum of the individual latencies; **(2)** the decoding phase is memory-bound, with its batched processing capability defined by a single latency value. This distinction has been validated in several studies (DistServe OSDI'24, Splitwise ISCA'24).
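The combination described above can be sketched as a toy calculation (this is not the authors' profiler; every number below is hypothetical):

```python
# Toy latency estimate assembled from one-time per-layer profiles, in the
# spirit of the rebuttal's description. Not the authors' profiler; all
# numbers (layer counts, per-layer ms, PP comm ms) are made up.
def estimate_latency_ms(n_layers, tp_layer_ms, pp_comm_ms, pp_degree):
    """Per-request latency: TP cost summed over all layers + PP communication."""
    compute = n_layers * tp_layer_ms       # TP compute + comm for every layer
    comm = (pp_degree - 1) * pp_comm_ms    # activations passed between PP stages
    return compute + comm

# Prefill is compute-bound: a batch costs roughly the sum of its requests.
prefill_batch_ms = sum(estimate_latency_ms(80, 2, 5, 4) for _ in range(8))
# Decoding is memory-bound: one step serves the whole batch at once.
decode_step_ms = estimate_latency_ms(80, 1, 5, 4)
print(prefill_batch_ms, decode_step_ms)
```

The sketch mirrors the compute-bound vs. memory-bound distinction above: the prefill batch cost sums per-request latencies, while a decode step is charged once for the whole batch.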
|Diff Configs|Real (req/s)|Estimated (req/s)|
|-|-|-|
|H100(2,4)|0.56|0.60|
|H100(4,2)|0.44|0.47|
|H100(4,2)(cross machine)|0.42|0.44|
|L40(2,4)|0.42|0.46|
|L40(4,2)|0.21|0.22|
|L40(4,2)(cross machine)|0.18|0.19|
|H100+A100(4,2)(cross machine)|0.48|0.52|

The table demonstrates examples of our cost estimation under a long-input, short-output workload (i.e., workload 1 in Figure 4); the notation (2,4) indicates that the TP degree is 2 and the PP degree is 4. Although the estimations are not perfectly accurate, they are sufficiently reliable (estimation errors within 4%-7%) for selecting the optimal configurations. We will integrate the details of the one-time profiling into the updated version of our paper.

> 2. Workload distribution absent from MILP formulation.

Thanks for the reviewer's detailed and in-depth understanding of our paper. Our workload assignment method ensures that every workload type is completely allocated across all configurations, as enforced by the constraint: $\sum_{c \in C} x_{c,w} = 1, \forall\,w$. Although these variables are normalized to sum to 1, they can be scaled by the actual workload counts (e.g., 500 requests for type 1, 300 requests for type 2) during implementation. To avoid misunderstanding, we will revise the formulation in the updated version of our paper: "Let $f_w$ be the total number of requests for workload $w$. The time required on $c$ is given by $T_c = \sum_{w=1}^{W} \frac{x_{c,w} \cdot f_w}{y_c \cdot h_{c,w}}$."

> 3. Integrating more heterogeneous baselines.

||Llama-30B|Llama3-70B|
|-|-|-|
|Helix|8.49 req/s|5.72 req/s|
|Ours|11.43 req/s (35%↑)|7.13 req/s (25%↑)|

We conduct additional experiments comparing our system with Helix (ASPLOS'25). Specifically, we compare our method against Helix under a price budget of $15 per hour on the AzureTrace dataset.
While Helix optimizes heterogeneous LLM serving using max-flow and MILP scheduling, our method explicitly considers workload heterogeneity and GPU composition optimization, resulting in better cost efficiency. Experimental results show that our system outperforms Helix by 25–35%. We will integrate the baseline into our updated paper.

> 4. Additional Suggestions.

Thanks for the reviewer's detailed review. We will address the following mistakes and update our paper:
- We will update Figures 3, 4, 11, 12, and 13 to use "Throughput Per Dollar".
- We will explain that the x-axis ticks represent P5 Latency, P10 Latency, P15 Latency, etc.
- We will update Figures 4, 12, and 13 to use "Different Deployment Configurations" instead of "Different GPU types".
- We will clarify that different bars within Figure 4 represent different deployment methods, and that the optimal method varies according to workload, model type, and GPU type.
- We will correct $m$ and $M$ to $w$ and $W$.
- We will check all the cited papers and update the references to the published versions, e.g., Sarathi-Serve (OSDI'24), HexGen (ICML'24), WildChat (ICLR'24), etc.

---

Rebuttal Comment 1.1: Comment: Thanks for your efforts and clarification. The rebuttal has successfully addressed some of my concerns, and I have updated my score accordingly.

---

Reply to Comment 1.1.1: Comment: Thank you for your acknowledgment. We will revise the paper according to your suggestion.
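The makespan formulation quoted in the rebuttal above can be illustrated with a tiny brute-force search over workload splits; the grid search merely stands in for the MILP solver, and all throughput, replica, and request numbers are illustrative:

```python
from itertools import product

# Brute-force sketch of the makespan objective from the rebuttal:
#   T_c = sum_w x[c][w] * f[w] / (y[c] * h[c][w]),  minimize max_c T_c.
# Two hypothetical configs, two workload types; not the authors' solver.
H = {"H100(2,4)": [0.60, 0.30],   # h[c][w]: throughput (req/s) per workload type
     "L40(2,4)":  [0.46, 0.40]}
Y = {"H100(2,4)": 2, "L40(2,4)": 3}   # y_c: replicas of each configuration
F = [500, 300]                         # f_w: total requests per workload type

def makespan(x):
    return max(sum(x[c][w] * F[w] / (Y[c] * H[c][w]) for w in range(len(F)))
               for c in H)

# Grid over the fraction of each workload sent to the H100 config; the
# remainder goes to the L40 config, so sum_c x[c][w] = 1 by construction.
grid = [i / 10 for i in range(11)]
best_x = min(
    ({"H100(2,4)": [a, b], "L40(2,4)": [1 - a, 1 - b]}
     for a, b in product(grid, grid)),
    key=makespan,
)
print(round(makespan(best_x), 1))
```

The balanced split always beats sending everything to one configuration, which is the intuition behind co-optimizing assignment with composition.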
Summary: This paper investigates cost-efficient LLM serving on heterogeneous GPUs. It benchmarks various GPU types and develops a MILP-based scheduling algorithm to optimize deployment configurations. The study shows that leveraging heterogeneous GPUs improves cost efficiency compared to homogeneous setups. Claims And Evidence: Claim: Heterogeneous GPU compositions improve cost-efficiency over homogeneous deployments. Evidence: Benchmarking results demonstrate that different GPU types align well with varying workload demands, leading to improvements in cost-efficiency. Claim: Optimal deployment configurations significantly affect performance. Evidence: Experimental results show that tuning deployment strategies (e.g., tensor parallelism vs. data parallelism) improves performance. Claim: The proposed MILP-based scheduling algorithm outperforms existing heterogeneous serving frameworks. Evidence: Comparisons with HexGen show up to 18% improvement in throughput. Methods And Evaluation Criteria: The study evaluates its approach on real-world and synthetic workload traces, including Llama3-8B and Llama3-70B models. The study lacks a discussion on the impact of fluctuating real-time workloads. Theoretical Claims: The paper does not introduce new theoretical models but formulates GPU scheduling as a MILP problem. Experimental Designs Or Analyses: While the evaluation is thorough, several weaknesses remain: The paper does not address how the system handles sudden bursts of traffic or fluctuating real-time workloads. While the proposed method improves cost-efficiency, it lacks details on handling conflicts between prioritizing throughput and latency. The paper does not clearly articulate why adding budget constraints to existing LP solvers is challenging, given prior work like Melange.
Supplementary Material: The supplementary material includes additional details on GPU compositions and scheduling configurations but does not address the fundamental limitations of modeling assumptions. Relation To Broader Scientific Literature: This is a growing area, but novelty with respect to existing literature is thin or unclear. Essential References Not Discussed: QLM (SoCC 2024): Addresses SLO-aware LLM serving and multi-node optimizations. Sarathi Serve (OSDI 2024): Implements prefill chunking for hybrid execution batches. Vidur (MLSys 2024): Solves the problem of deployment tuning. Other Strengths And Weaknesses: See the rest of the review. Other Comments Or Suggestions: How does the approach handle workload spikes and dynamic GPU availability fluctuations? What are the trade-offs between prioritizing cost-efficiency and request latency? Under what scenarios does the MILP-based scheduling significantly diverge from existing heterogeneous scheduling frameworks? Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. Workload spikes and dynamic GPU availability fluctuations.

Online rescheduling to adapt to workload changes and GPU drops is an interesting idea that can easily be integrated into our current solution. We introduce this approach and present some preliminary experimental results.

**Solution: Online replanning.** To address the reviewer's concern, we implement an online replanning mechanism analogous to the one proposed in DistServe (OSDI'24). Concretely, the system could **(1)** monitor the real-time composition of incoming workloads (mentioned in Section 3), **(2)** track GPU resource availability within the cloud environment, and **(3)** upon detecting a significant shift in workload distribution (e.g., an increase in the proportion of certain workload types), execute the scheduling algorithm again, incorporating recent historical data to produce an updated serving plan.

| Status | Workload Change | GPU Drop |
|-|-|-|
| Before | 26.89 req/s | 26.89 req/s |
| After | 23.70 req/s (13%↓) | 20.80 req/s (29%↓) |
| Replanning | 29.61 req/s (25%↑) | 22.85 req/s (10%↑) |

**Replanning results.** **(1)** We test a workload surge in short-output requests in AzureTrace. Before the surge, the optimal GPU composition is {20%, 65%, 15%} for datacenter, workstation, and consumer GPUs, achieving 26.89 req/s. After the workload change, the throughput degrades to 23.7 req/s. In this case, replanning (shifting allocation to {63%, 23%, 14%}) boosts throughput to 29.61 req/s. **(2)** We also test the case when a GPU drop happens (4 H100s go down): the throughput falls from 26.89 to 20.80 req/s. In this case, replanning raises throughput to 22.85 req/s. Additionally, the rescheduling and model-weight reloading phases can be completed within 1–2 minutes, which is significantly shorter than the hourly timescale of real-world workload changes. We will integrate this discussion into Section 6 of our updated draft.

> 2.
The trade-offs between prioritizing cost-efficiency and request latency. **Trade-offs.** We acknowledge that there are trade-offs between optimizing cost-efficiency and latency. **(1)** Prioritizing cost-efficiency typically involves using fewer resources (i.e., lower budgets), which can lead to slightly higher response latencies. **(2)** In contrast, prioritizing latency often requires utilizing more resources (i.e., incurring higher costs). **Our focus.** Despite the trade-offs, our work mainly focuses on cost-efficiency for two reasons. **(1)** Some inference tasks do not require extremely low latency; meeting a predefined latency threshold (e.g., reading/speaking speed) is usually sufficient. And more importantly, **(2)** in resource-limited scenarios, where systems are naturally under-provisioned, emphasizing throughput can also indirectly improve latency by reducing queuing delays. Our experimental results in Figure 6 demonstrate that our method achieved the lowest P99 latency among all compared baselines. We will integrate the above discussion into Section 6 of the updated draft. > 3. Why is adding budget constraints to existing LP solvers (e.g., Melange) challenging? Existing heterogeneous frameworks require substantial redesign of their scheduling algorithms or significant additional system development to achieve cost optimization comparable to our approach. **Compare with Melange.** **(1)** Melange assigns each workload to a *single* GPU type, overlooking the impact of parallelism strategies on resource allocation. As shown in Section 3, tuning parallelism strategies is crucial, and incorporating this dimension significantly expands the search space and demands additional system development.
**(2)** Melange assumes unlimited resources, yet in practice, GPU availability and budget constraints are real issues that necessitate evaluating workloads across *multiple* GPU types to find a globally optimal solution—further expanding the search space and rendering Melange’s approach impractical. **Compare with HexGen and Helix.** Both approaches **(1)** fail to consider workload heterogeneity and the presence of mixed workloads during scheduling, and **(2)** their scheduling algorithms are designed for fixed heterogeneous cluster configurations, missing opportunities for further performance improvements achievable through GPU composition optimization. > 4. Essential References Not Discussed. QLM (SoCC 2024) focuses on SLO-aware serving and multi-node optimizations by refining request ordering; Sarathi Serve (OSDI 2024) optimizes batching through prefill chunking to mitigate interference between the prefill and decoding stages; and Vidur (MLSys 2024) develops an accurate simulator for deployment tuning. Our work has a different goal—it is dedicated to achieving heterogeneous, cost-efficient serving in cloud environments. We will integrate the discussion of these references into Section 2 of the updated draft.
Summary: This paper focuses on the cost efficiency of LLM services on heterogeneous GPUs, proposing ways to improve efficiency by optimizing GPU composition, deployment configurations, and workload allocation. Claims And Evidence: When discussing related work, the paper mainly emphasizes that other methods do not consider GPU heterogeneity and user budget constraints, but does not analyze in depth the specific shortcomings of existing methods, e.g., whether pre-existing methods can achieve similar cost-optimization results by adjusting their strategies. Methods And Evaluation Criteria: The paper assumes that task assignments are all computed before the task is executed, while in real cloud environments, task arrivals are dynamic and the paper does not provide an online scheduling strategy that adapts to dynamic changes in tasks. Theoretical Claims: Yes Experimental Designs Or Analyses: The paper's experiments are mainly based on the Vast.ai platform, which has limited GPU availability, so the experimental results may be difficult to generalize to large-scale cloud environments. Supplementary Material: Yes Relation To Broader Scientific Literature: The paper only compares a few heterogeneous LLM scheduling methods and lacks comparative analysis with a wider range of scheduling optimization methods (e.g., reinforcement learning methods, heuristic scheduling methods). Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The organization of this paper is good. Weaknesses: 1. The research problem of the paper focuses too much on the cost-effective optimization of heterogeneous GPUs, but other factors that may affect the cost (e.g., network latency, energy consumption, computational complexity of scheduling algorithms, etc.) are under-explored. 2. The paper fails to address more complex scheduling challenges in cloud environments, such as scheduling GPU resources across data centers and uncertainty due to preemptible instances.
Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > 1. Specific shortcomings of existing methods.

Existing methods require a heavy redesign of the scheduling algorithms or demand significant additional system development to achieve similar cost optimization. **Compare with HexGen and Helix.** Both approaches (1) fail to consider workload heterogeneity and the presence of mixed workloads during scheduling, and (2) their scheduling algorithms are designed for fixed heterogeneous cluster configurations, missing opportunities for GPU composition optimization. **Compare with Melange.** Melange does not consider the impact of parallelism strategies or cloud resource constraints on deployment performance. Its method fails to utilize different parallelism strategies or different GPU types to handle various workloads.

> 2. Adapting to dynamic tasks and preemption uncertainty.

Online scheduling to adapt to workload changes and GPU drops is an interesting idea that can easily be integrated into our current solution: **Online replanning.** We implement an online replanning mechanism analogous to the one proposed in DistServe (OSDI'24). Concretely, the system could (1) monitor the real-time composition of incoming workloads, (2) track GPU resource availability within the cloud environment, and (3) upon detecting a significant shift in workload distribution, execute the scheduling algorithm again, incorporating recent historical data to produce an updated serving plan.

|| Workload Change | GPU Drop |
|-|-|-|
|Before|26.89(req/s)|26.89|
|After|23.70(13%↓)|20.80(29%↓)|
|Replanning|29.61(25%↑)|22.85(10%↑)|

**Experimental Results.** (1) We test a workload surge in short-output requests in AzureTrace. Before the surge, the optimal GPU composition is {20%, 65%, 15%} for datacenter, workstation, and consumer GPUs, achieving 26.89 req/s. After the workload change, the throughput degrades to 23.7 req/s. In this case, replanning (shifting allocation to {63%, 23%, 14%}) boosts throughput to 29.61 req/s.
(2) We also test the case when a GPU drop happens (4 H100s go down): the throughput falls from 26.89 to 20.80 req/s. In this case, replanning raises throughput to 22.85 req/s. Additionally, the rescheduling and model-weight reloading phases can be completed within 1–2 minutes, which is significantly shorter than the hourly timescale of real-world workload changes.

> 3. The Vast.ai platform has limited GPU availability.

GPU shortages are a well-documented challenge in cloud-based tasks (SkyPilot NSDI'23). This issue is not unique to Vast.ai—similar GPU availability limitations are observed across many widely-used platforms, including FluidStack, DataCrunch, and RunPod. Even major cloud providers, such as Google Cloud, face GPU quota constraints, as noted in Table 4 of Helix (ASPLOS'25), with quotas being limited and varying across regions.

> 4. Compare other scheduling methods.

**Scheduling algorithm comparison.** We further compare our method with a heuristic scheduling method by replacing the plan optimization component of MILP with population-based mutation and selection (i.e., a genetic algorithm). In a 48-GPU cluster, the heuristic method requires 115 seconds to achieve comparable performance to our MILP formulation, which only requires 30 seconds. **End-to-end system comparison.** We compared our method with HexGen in our paper; here, we further compare our system with Helix (ASPLOS'25), which optimizes heterogeneous LLM serving using max-flow and MILP scheduling, under a price budget of $15 per hour on the AzureTrace dataset. Our method explicitly considers workload heterogeneity and GPU composition optimization, resulting in greater cost efficiency.

||Llama-30B|Llama3-70B|
|-|-|-|
|Helix|8.49(req/s)|5.72|
|Ours|11.43(35%↑)|7.13(25%↑)|

> 5. Other factors that may affect the cost.

**Network Latency impact.** We have incorporated network latency considerations into our MILP formulation.
For instance, (1) the GPU-to-GPU communication latency is used to determine the pipeline communication cost, and (2) tensor parallelism with high communication volumes is prioritized to use intra-machine networks. **Energy consumption.** Our focus is optimizing the cost efficiency of cloud-based LLM serving; optimizing energy consumption is orthogonal to our paper's scope. However, we acknowledge that energy consumption is a critical metric and could be considered as a future direction. **Computational complexity.** Thank you for mentioning this. We will include a detailed discussion of the computational complexity of the MILP formulation in Section 4.3 of the updated draft: "the theoretical worst-case time complexity scales as $O(\mathrm{poly}(|C|, W, N) \times 2^{|C|})$; the polynomial factor accounts for the overhead of processing each node in the search tree". > 6. Scheduling across data centers. We have already used GPUs across data centers in our heterogeneous experiment setup, since Vast.ai provides different GPUs in different data centers (e.g., A40 and A100 GPUs reside in Australia and New Jersey, US).
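The replanning trigger described in point 2 above could be sketched roughly as follows; the total-variation drift measure and the 0.15 threshold are our illustrative assumptions, not the authors' implementation:

```python
# Illustrative replanning trigger: re-solve the serving plan when the
# observed workload mix drifts far from the mix the current plan assumes.
# The drift measure and threshold are assumptions for this sketch.
def total_variation(p, q):
    """Half the L1 distance between two workload-mix distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def should_replan(planned_mix, observed_mix, threshold=0.15):
    return total_variation(planned_mix, observed_mix) > threshold

planned = [0.3, 0.5, 0.2]    # workload mix the current plan was solved for
observed = [0.6, 0.25, 0.15]  # mix observed after a surge in one request type
print(should_replan(planned, observed))
```

In a deployment, a `True` result would kick off the MILP re-solve and model-weight reloading, which the rebuttal reports take 1–2 minutes.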
AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models
Accept (oral)
Summary: This paper focuses on descriptive emotion understanding. Compared with discriminative emotion recognition, descriptive emotion understanding provides the possibility of modeling complex emotions. To promote the development of this field, they proposed new datasets (MER-Caption and MER-Caption+) and new models (AffectGPT). They also established a benchmark (MER-UniBench) for MLLM-based descriptive emotion understanding solutions. Experimental results verify the effectiveness of their proposed framework and dataset. Claims And Evidence: Yes. This paper contains extensive experiments to support its conclusions. Methods And Evaluation Criteria: Yes. Theoretical Claims: This is not a theoretical paper. Experimental Designs Or Analyses: Yes. Their experimental design and analysis are clear and reasonable. This paper first reports the performance of different MLLMs on MER-UniBench, demonstrating the effectiveness of their proposed AffectGPT framework. Then, they compare different instruction datasets under the same framework and training settings, demonstrating the effectiveness of their proposed dataset (MER-Caption+). At the same time, the ablation study of dataset filtering techniques, the necessity of pre-fusion operations, the discussion on the impact of input modality, and the user study of dataset quality assessment also demonstrate the effectiveness of each proposed module in this paper. Supplementary Material: Yes. The code and dataset are provided in the supplementary material. Relation To Broader Scientific Literature: The key contribution of this paper is advancing the field of descriptive emotion understanding through the introduction of new datasets, architectures, and benchmarks. Essential References Not Discussed: No, essential related work has been correctly cited. 
Other Strengths And Weaknesses: Despite the extensive experiments conducted in this paper, there are several additional experiments and discussions that could further enhance the paper's comprehensiveness: 1. While the paper discusses the choice of LLMs (as shown in Figure 4), it would be beneficial to also explore and discuss the impact of different audio and video encoders. 2. Please further discuss the influence of rank selection in LoRA. 3. It appears that this paper does not discuss how videos are sampled for the video branch. 4. Additionally, the paper should discuss the impact of varying the number of sampled frames on the model's performance. 5. In Figure 2, the dataset construction pipeline could benefit from further explanation. 6. In Table 3, please explain the details of the filtering technique you used for the general instruction dataset. Other Comments Or Suggestions: Please refer to my comments on the weakness part. Questions For Authors: Please refer to my comments on the weakness part. Ethical Review Concerns: Not needed. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** It would be beneficial to also explore and discuss the impact of different audio and video encoders. **A1:** **(1) Impact of Audio encoders.** The choice of audio encoder does not significantly impact performance. This confirms that AffectGPT's remarkable performance is primarily attributed to our proposed high-quality, large-scale dataset and effective framework rather than acoustic encoders. Meanwhile, we note that ImageBind achieves slightly worse performance compared to other audio encoders. One possible explanation for this is that other audio encoders are widely applied in audio content understanding tasks (such as ASR), where audio content plays a crucial role in emotion recognition. |Audio Encoder|MER-UniBench| |-|-| |ImageBind|73.81| |Data2vec_Base|74.48| |WavLM_Large|74.39| |​HUBERT_Large|​74.77| **(2) Impact of Video Encoder.** The choice of video encoder has a limited impact on performance. Interestingly, CLIP\_VIT slightly outperforms EVA\_CLIP and DINOv2, consistent with the findings in MERBench [1], a unified benchmark for traditional categorical frameworks. These results suggest that conclusions drawn from traditional categorical frameworks, such as encoder selection, may also be applicable to MLLM-based descriptive frameworks. |Video Encoder|MER-UniBench| |-|-| |EVA_CLIP|73.42| |DINOv2|73.12| |SigLIP|74.63| |​CLIP_VIT|​74.77| [1] Lian, Zheng, Licai Sun, Yong Ren, Hao Gu, Haiyang Sun, Lan Chen, Bin Liu, and Jianhua Tao. "Merbench: A unified evaluation benchmark for multimodal emotion recognition." arXiv preprint arXiv:2401.03429 (2024). **Q2:** Please further discuss the influence of rank selection in LoRA. **A2:** In the following table, we count the increase in trainable parameters when using LoRA for the LLM branch. The first row represents the model without the LoRA module. Experimental results demonstrate that fine-tuning the LLM with LoRA improves performance compared to models without LoRA. 
However, increasing the rank for LoRA-based models does not yield significant performance gains and instead increases computational costs. |Rank|Increased Parameters|MER-UniBench| |-|-|-| |-|0|73.30| |8|20,185,088|74.65| |16|40,370,176|74.77| |32|80,740,352|​74.92| **Q3:** How are videos sampled for the video branch? **A3:** We uniformly sample 8 frames per video by default. **Q4:** The impact of varying the number of sampled frames on the model's performance. **A4:** In the following table, we compare two types of inputs: (1) face-only and (2) face-text combinations, and evaluate model performance across different sampling frame counts, ranging from 2 to 64. We observe that using too few frames (e.g., fewer than 2) results in a noticeable decline in performance, indicating that insufficient frames lead to information loss. However, further increasing the number of sampling frames (e.g., more than 8) does not yield significant performance improvements. This can be attributed to the fact that MER tasks typically use short-duration videos with relatively stable facial expressions. Therefore, we default to sampling 8 frames per video in this paper. |Input Modality|0|2|4|8|16|32|64| |-|-|-|-|-|-|-|-| |Face-only|-|59.17|60.76|60.47|61.26|61.10|61.49| |Face-text|67.44|72.94|73.54|74.31|73.51|74.10|​74.23| **Q5:** The dataset construction pipeline could benefit from further explanation. **A5:** We adopt a **model-led, human-assisted** annotation strategy. This approach leverages human priors to guide description generation and sample filtering, ultimately enabling automatic annotation for unlabeled data. **(1) Description Generation.** During the description generation process, we first conduct preliminary experiments. In this phase, a small subset of samples is selected, and annotators are asked to assign fine-grained emotional labels to each sample. 
Based on the insights gained from these preliminary experiments, we perform a model selection process to ensure the quality of the automatically generated descriptions. **(2) Sample Filtering.** During sample filtering, we employ a two-stage filtering technique. In the first stage, we remove samples with mismatched audio and video, as well as those with abnormal description lengths. In the second stage, we use *model-based crowdsourcing* to generate relatively reliable emotion labels. If the labels derived from the descriptions differ significantly from those obtained through *model-based crowdsourcing*, the descriptions are considered to be of low quality and are removed. In summary, we integrate human priors into both description generation and sample filtering to ensure the high quality of the generated descriptions in MER-Caption+. **Q6:** The details of the filtering technique. **A6:** For general instruction datasets, we utilize the prompt provided in Appendix E to extract emotion labels from each instruction-answer pair. Samples that result in empty emotion outputs are removed.
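The uniform 8-frame sampling described in A3/A4 above can be sketched minimally; the default of 8 frames per clip matches the rebuttal, but the centered-index placement is our assumption rather than the authors' exact implementation:

```python
# Minimal uniform frame sampling sketch for the video branch. The default
# of 8 frames per clip follows the rebuttal; the centered-index scheme
# is an assumption of this sketch.
def uniform_frame_indices(total_frames, n_samples=8):
    if total_frames <= n_samples:
        return list(range(total_frames))
    step = total_frames / n_samples
    # take the middle frame of each of the n_samples equal segments
    return [int(step * i + step / 2) for i in range(n_samples)]

print(uniform_frame_indices(240))
```

For a 240-frame clip this picks one frame from the middle of each of eight equal segments, which is one common way to realize uniform sampling.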
Summary: This paper introduces a new dataset for the multimodal emotion recognition (MER) task. The dataset is constructed using a model-driven, human-assisted approach. Initially, a coarse-grained dataset is generated through data description, followed by fine-grained data refinement through both low-level and high-level filtering. Additionally, a novel MLLMs-based method, "AffectGPT," is proposed, which enhances model performance by integrating audio and video feature vectors (pre-fusion) before feeding them into the LLM. This paper also presents a new MER evaluation standard to assess model performance in MER tasks. Several experiments validate the effectiveness of the dataset, the proposed method, and the evaluation standard. ## Update After Rebuttal: I will keep my score as it is, even after the rebuttal, which addresses all of my comments. Claims And Evidence: The claims have been supported by experimental analysis. For example: "We propose a pre-fusion operation that moves the cross-modal interaction outside the LLMs, further enhancing multimodal integration." Additionally, I agree that the current community still lacks large-scale datasets with detailed, descriptive emotion annotations necessary to fully realize the potential of MLLMs. Methods And Evaluation Criteria: - Mainstream AV-LLMs leave all cross-modal interaction to the LLMs, which is insufficient for handling MER with multimodal characteristics. >> confirmed by experiments. Strengths - Novel two-level filtering (low-level modality alignment + high-level model crowdsourcing) - Multi-level emotion grouping metrics (Level 1-3) handle free-form outputs effectively Weaknesses - Limited discussion on computational costs of pre-fusion operations Theoretical Claims: The pre-fusion mechanism lacks theoretical analysis of modality interaction dynamics. Experimental Designs Or Analyses: Yes, it is valid.
Supplementary Material: The supplementary material includes the dataset labels and code, along with an appendix. The appendix primarily covers dataset comparisons and prompts. It is well-written and provides valuable details. Relation To Broader Scientific Literature: The affective computing community is vast, and this dataset is currently the largest available, with a well-justified data collection procedure. The proposed method also demonstrates strong performance on existing datasets, and the suggested metrics appear reasonable. Therefore, I believe this work will be a valuable contribution to the community. Moreover, the data collection procedure could serve as a guideline for other human studies aiming to collect large-scale datasets. This makes it potentially useful not only for the affective computing field but also for other research areas. Essential References Not Discussed: There is extensive research on these topics, but in assessing this paper, I did not find any major references missing. Other Strengths And Weaknesses: Strengths: 1. This paper introduces a new MER dataset along with a method for collecting and constructing the dataset. Compared to other related datasets, it offers higher quality, a larger volume of data, and more diverse classifications. 2. This paper proposes a new MER evaluation standard, "MER-UniBench," which provides a comprehensive evaluation framework for MLLMs-based MER emotion understanding tasks. 3. Extensive experiments have been conducted to validate the dataset and evaluation standard. Weaknesses: 1. The structural innovation of AffectGPT is relatively insufficient, and it does not fully explain why the "Pre-fusion" operation can serve as a solution for cross-modal interaction. 
Other Comments Or Suggestions: Originality - Novel model-led annotation paradigm - Pre-fusion mechanism builds on established cross-modal attention Significance - Large-scale MLLM-focused emotion dataset - Practical metrics for free-form emotion analysis Questions For Authors: 1. Regarding the ablation study of the model, since the video and audio features have already been fused, why are these features still fed separately into the LLMs? Have experiments been conducted using only the fused features? 2. What is the purpose of designing the evaluation metric $F^k_s$, or in other words, what specific aspect does this metric reflect? 3. Is it possible that MER-Caption+ also contains some instances where the data does not match the descriptions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** Limited discussion on computational costs of pre-fusion operations. The pre-fusion mechanism lacks theoretical analysis of modality interaction dynamics. The structural innovation of AffectGPT is relatively insufficient, and it does not fully explain why the "pre-fusion" operation can serve as a solution for cross-modal interaction. **A1:** **Regarding computational efficiency, the pre-fusion operation only involves Q-Former or attention operations, which are significantly less computationally intensive than LLMs. Theoretically, the Q-Former facilitates cross-modal interaction by distilling knowledge from multimodal content into query tokens, while the attention mechanism achieves this through dynamically predicted attention weights based on multimodal inputs.** **The model architecture innovation represents one aspect of our work, and our contributions extend significantly beyond this.** Specifically, we introduce the largest-scale emotion description dataset to date, constructed using an efficient model-led, human-assisted approach; we establish a comprehensive benchmark encompassing three key tasks: fine-grained emotion recognition, basic emotion classification, and sentiment analysis; and we develop specialized evaluation metrics designed for assessing free-form text outputs. **Q2:** Regarding the ablation study of the model, since the video and audio features have already been fused, why are these features still fed separately into the LLMs? Have experiments been conducted using only the fused features? **A2:** Thanks for your valuable comment. Following your suggestion, we conducted additional experiments to investigate this aspect. 
**Our results demonstrate that incorporating raw audio and video features alongside the fused features yields modest performance improvements compared to using fused features alone.**

| Input Type | MER-UniBench Score |
|---|---:|
| Audio, video features | 72.95 |
| Audio-video fused features | 74.14 |
| Audio, video, and audio-video fused features | 74.77 |

**Q3:** What is the purpose of designing the evaluation metric $F_s$, or in other words, what specific aspect does this metric reflect? **A3:** In this paper, $Precision_s$ indicates the number of correctly predicted labels, and $Recall_s$ indicates whether the prediction covers all ground truth. **$F_s$ is the harmonic mean of the two metrics $Precision_s$ and $Recall_s$.** **Q4:** Is it possible that MER-Caption+ also contains some instances where the data does not match the descriptions? **A4:** Thanks for your valuable comment. **Yes, there may be inaccurate descriptions in MER-Caption+ because we used an automatic annotation strategy without manual checks.** However, the experimental results in Table 3 demonstrate that MER-Caption+ achieves significantly better performance than the manually annotated MAFW dataset. The main reason is that humans tend to focus on major clues, which can easily lead to incomplete descriptions. **These results confirm that, despite the lack of manual checks in MER-Caption+, we can still ensure the quality of the labels. In the future, we will investigate other post-filtering techniques to further improve MER-Caption+'s annotation quality.**
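For reference, the harmonic mean mentioned in A3 takes the standard form:

```latex
F_s = \frac{2 \cdot \mathrm{Precision}_s \cdot \mathrm{Recall}_s}{\mathrm{Precision}_s + \mathrm{Recall}_s}
```

so $F_s$ is high only when the predicted labels are both precise and cover the ground truth.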
Summary: This paper presents a new video content description dataset with emotional words and highlights a novel annotation method for the dataset. Additionally, it proposes a model that enhances multimodal emotion recognition. The primary innovation of this model lies in its pre-fusion strategy for multimodal inputs. Finally, the paper introduces a new evaluation benchmark for assessing the performance of emotion recognition models. ## update after rebuttal I would like to keep my rating. Claims And Evidence: The claims proposed in this paper are mostly supported by experimental evidence or have been mentioned in previous studies, and the reasoning is relatively sound. Methods And Evaluation Criteria: The method proposed in this paper effectively helps address the challenges in current emotion recognition tasks. The dataset construction method reduces the difficulty of dataset creation. The model, which adopts a new strategy, also demonstrates improved performance. Finally, the benchmark introduced in the paper provides an effective means to evaluate the performance of multimodal emotion recognition models. Theoretical Claims: I have indeed verified the correctness of the theories proposed in the paper. The paper includes numerous ablation experiments on the proposed methods. The results of these experiments validate the correctness of the proposed approaches. For example, Section 5 of the paper focuses on presenting various experimental results. Experimental Designs Or Analyses: The paper clearly describes the design details of each experiment. When presenting comparative experiments, it frequently emphasizes "fairness." I also believe that the authors have made every effort to conduct fair comparative experiments. Supplementary Material: I have also gone through most of the supplementary materials, paying special attention to the parts that interest me. Additionally, I have identified some detailed issues within them. 
Relation To Broader Scientific Literature: The paper builds upon previous multimodal emotion recognition research by summarizing and enhancing existing work. The authors propose new contributions, which are validated through experiments. The dataset introduced in the paper is also an innovation based on the experiences of previous studies. Finally, the benchmark presented serves as a comprehensive evaluation tool for assessing the capabilities of multimodal emotion recognition models. Essential References Not Discussed: No. Other Strengths And Weaknesses: The main advantages of this paper are as follows: 1. The dataset proposed in the paper is large in scale and of high quality. The relevant experiments are thorough, and a user study has been conducted. While ensuring that the dataset can be automatically annotated, the quality of the data is also maintained. The proposed annotation and filtering methods have been experimentally validated for effectiveness. 2. The model proposed for multimodal emotion recognition performs well. The effectiveness of the key component, the multimodal pre-fusion strategy, has also been validated through experiments. 3. The paper introduces a new benchmark for evaluating the performance of multimodal emotion recognition models. This benchmark can comprehensively assess the model's ability to handle various multimodal emotion recognition tasks. Although the paper has many advantages, there are still some details that need attention: 1. In Appendix F, experiments have shown that using a combination of SALMONN and mPLUG-Owl results in better performance. However, the authors still chose the combination of SALMONN and Chat-UniVi. An explanation for this choice is needed. 2. The origin of the raw data used in the dataset construction is not provided by the authors. 3. A large amount of data was filtered out during the dataset construction process, which limits the application scenarios of the dataset. 4. 
The evaluation metric used in the benchmark for basic emotion recognition tasks may have the risk of being inaccurate. The current evaluation metric only calculates the number of correctly matched emotion words in the output of the multimodal emotion recognition model. However, if the model outputs entirely incorrect results, the impact of these errors is not accounted for. 5. In Table 3, the performance of the LLaVA dataset decreased after applying the filtering strategy. The authors did not explain what the filtering strategy was. If it is the strategy mentioned in the paper, it suggests that this filtering strategy does not work well for the LLaVA dataset. The authors should analyze the reason. 6. Although the paper proposes a new multimodal emotion recognition model with significant performance improvement, the model's innovation compared to previous work is limited, as it mainly adds a pre-fusion mechanism between multiple modalities. Other Comments Or Suggestions: The paper is overall well-written, and I hope the authors can provide explanations for the various detailed issues mentioned above. Questions For Authors: The specific issues can be found in the weakness mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our fair comparative experiments, innovative dataset, effective key components, and comprehensive evaluation benchmark. **Q1:** In Appendix F, experiments have shown that using a combination of SALMONN and mPLUG-Owl results in better performance. However, the authors still chose the combination of SALMONN and Chat-UniVi. An explanation for this choice is needed. **A1:** Thanks for your careful review. In this paper, we do not use the combined results for model selection but instead rely on the performance of individual models. For example, for VLLM, Chat-UniVi outperforms mPLUG-Owl, and for ALLM, SALMONN outperforms SECap. Therefore, we use the combination of Chat-UniVi and SALMONN for description generation. The combination experiments are primarily designed to demonstrate that integrating multimodal cues can lead to better performance. Your suggestion of using the combined results for model selection is insightful, and we will conduct more experiments in this direction. **Q2:** The origin of the raw data used in the dataset construction is not provided by the authors. **A2:** **The raw data comes from the unlabeled portions of MER2024, which is used with permission from the dataset owners.** In this work, we annotate each unlabeled sample with emotion descriptions. The complete annotation procedure is illustrated in Figure 2. **Q3:** A large amount of data was filtered out during the dataset construction process, which limits the application scenarios of the dataset. **A3:** We would like to highlight that **dataset quality is as important as quantity**. This is verified by the experimental results in Table 4: Given the same testing set (with diverse sources/scenarios), the model trained on the filtered datasets performs better than the one trained on the noisy dataset. **Q4:** The evaluation metric used in the benchmark for basic emotion recognition tasks may have the risk of being inaccurate. 
The current evaluation metric only calculates the number of correctly matched emotion words in the output of the multimodal emotion recognition model. However, if the model outputs entirely incorrect results, the impact of these errors is not accounted for. **A4:** Thanks for your comment. **In fact, we have taken this into account during the metric design process.** Basic emotion recognition tasks provide the majority-voted labels $y$, which are generally reliable. However, emotion descriptions produce free-form outputs $\mathbf{\hat{Y}}$ that may contain multiple labels, including fine-grained ones beyond basic emotions. **Therefore, we use the HIT rate as the metric, ensuring that the basic label $y$ should be at least in $\mathbf{\hat{Y}}$.** **Meanwhile, we attempt to design metrics for evaluating potentially incorrect labels in $\mathbf{\hat{Y}}$. However, the labels in $\mathbf{\hat{Y}}$ that differ from the basic label $y$ are not necessarily incorrect - they may represent some fine-grained emotions not covered by basic categories. But since basic emotion recognition tasks lack fine-grained reference labels, we have not yet established appropriate evaluation metrics for this purpose. This remains an important research direction for our future work.** **Q5:** In Table 3, the performance of the LLaVA dataset decreased after applying the filtering strategy. The authors did not explain what the filtering strategy was. If it is the strategy mentioned in the paper, it suggests that this filtering strategy does not work well for the LLaVA dataset. The authors should analyze the reason. **A5:** Thanks for your comment. Regarding general instruction datasets, we use a filtering process to retain only emotion-related instruction-answer pairs. Specifically, **we use the prompt in Appendix E and extract emotion labels from each instruction-answer pair. 
Samples yielding empty emotion outputs are removed.** As shown in Table 3, this filtering approach proves less effective for the LLaVA dataset. We hypothesize that the detailed descriptions in non-emotion subsets may also provide valuable cues for inferring emotional states in some scenarios. **Q6:** Although the paper proposes a new multimodal emotion recognition model with significant performance improvement, the model's innovation compared to previous work is limited, as it mainly adds a pre-fusion mechanism between multiple modalities. **A6:** **The model architecture novelty is only a part of our work, and our contributions extend significantly beyond this.** Besides the model architecture, we also use a model-led, human-assisted strategy to minimize human effort while constructing the largest multimodal emotion dataset to date. Also, we present a comprehensive benchmark covering fine-grained emotion recognition, basic emotion classification, and sentiment analysis, with metrics designed for free-form text outputs.
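As a purely illustrative aside (this is not the authors' AffectGPT implementation): the kind of pre-fusion step discussed in A6, letting audio tokens attend over video tokens before anything reaches the LLM, can be sketched in a few lines of dependency-free Python. All names and dimensions below are invented for the example.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def pre_fuse(audio, video):
    """Cross-attend each audio vector over the video vectors, then
    concatenate the attended video summary onto the audio vector,
    so cross-modal interaction happens before any LLM sees the tokens."""
    dim = len(video[0])
    fused = []
    for a in audio:
        # scaled dot-product scores against every video vector
        scores = [sum(x * y for x, y in zip(a, v)) / math.sqrt(len(a))
                  for v in video]
        weights = softmax(scores)  # attention weights predicted from the inputs
        attended = [sum(w * v[d] for w, v in zip(weights, video))
                    for d in range(dim)]
        fused.append(a + attended)
    return fused

audio_feats = [[1.0, 0.0], [0.0, 1.0]]   # toy 2-d audio tokens
video_feats = [[0.5, 0.5], [1.0, -1.0]]  # toy 2-d video tokens
fused = pre_fuse(audio_feats, video_feats)
# each fused token keeps its audio part and gains an attended video part
```

The point of the sketch is only the ordering: fusion is computed outside the language model, and the LLM would consume the already-fused vectors.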
Summary: This paper introduces a new dataset, pre-fusion model, and evaluation benchmark to advance multimodal, natural language-based emotion understanding. It proposes a model-led, human-assisted strategy to minimize human effort while constructing the largest multimodal emotion dataset to date. The model features a pre-fusion architecture that integrates audio and video signals before combining them with text, enhancing multimodal fusion. Additionally, the paper presents a comprehensive benchmark covering fine-grained emotion recognition, basic emotion classification, and sentiment analysis, with metrics designed for free-form text outputs. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Lack of Baseline Comparisons on MER-Caption+: To better isolate the impact of the new dataset vs. the new model design, it would be helpful to compare baseline models trained on MER-Caption+ against AffectGPT trained on MER-Caption+. This would clarify whether the performance boost comes primarily from the dataset or the model architecture. 2. Dataset Coverage and Diversity: The paper does not specify the scenarios included in the dataset (e.g., daily conversations, news reports, or movie reviews). Additionally, it is unclear whether the dataset primarily focuses on first-person speaking or also includes multi-person videos, which could affect its generalizability. 3. Impact of Frame Signals on Performance: The results suggest that using face signals alone outperforms frame signals, raising the question of whether frame signals introduce more noise. It would be valuable to explore whether this issue can be mitigated through data cleaning or improved visual signal alignment with other modalities. 
Supplementary Material: N/A Relation To Broader Scientific Literature: This paper introduces a comprehensive dataset with detailed natural language annotations, enabling a richer representation of emotions beyond predefined labels. It adopts a model-led, human-assisted annotation strategy, effectively balancing scale and label quality. Additionally, the paper presents a rigorous evaluation pipeline, featuring in-depth ablation studies on the dataset, model, and LLMs. Notably, it also includes a user study to assess and compare dataset annotations, further validating its approach. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our work's significance and your acknowledgment of our contributions—including the novel dataset, model architecture, and comprehensive benchmark for descriptive emotion understanding. These innovations enable richer, more flexible emotion representation that extends beyond conventional predefined-label paradigms. **Q1:** Lack of Baseline Comparisons on MER-Caption+: To better isolate the impact of the new dataset vs. the new model design, it would be helpful to compare baseline models trained on MER-Caption+ against AffectGPT trained on MER-Caption+. This would clarify whether the performance boost comes primarily from the dataset or the model architecture. **A1:** **In our experimental design, we systematically validate the effectiveness of both the new dataset and model using the control-of-variables method.** Specifically, in Table 3, we demonstrate the impact of our MER-Caption+ dataset by maintaining identical model architecture while varying only the training data. In Table 4, we verify the necessity of AffectGPT by keeping the training data constant while modifying only the model structure. **These carefully designed ablation studies can already verify the effectiveness of the new dataset and the new model.** **Q2:** Dataset Coverage and Diversity: The paper does not specify the scenarios included in the dataset (e.g., daily conversations, news reports, or movie reviews). Additionally, it is unclear whether the dataset primarily focuses on first-person speaking or also includes multi-person videos, which could affect its generalizability. **A2:** Thanks for your suggestion, and we will incorporate the following additional information in our revised manuscript. In this paper, we intentionally focus on single-person videos, as this allows us to eliminate interference from other speakers and reduce task difficulty. 
Multi-person MER belongs to another research topic and will be discussed in our future work.

| Category | Dataset | Chosen Set | # Samples | Label Description | Data Source |
|---|---|---|---|---|---|
| **Fine-grained Emotion** | OV-MERD+ | All | 532 | Unfixed categories and diverse number of labels per sample | movies, TV series |
| **Basic Emotion** | MER2023 | MER-MULTI | 411 | Most likely label among six candidates | movies, TV series |
| | MER2024 | MER-SEMI | 1,169 | Most likely label among six candidates | movies, TV series |
| | IEMOCAP | Session5 | 1,241 | Most likely label among four candidates | actor's performance |
| | MELD | Test | 2,610 | Most likely label among seven candidates | "Friends" TV series |
| **Sentiment Analysis** | CMU-MOSI | Test | 686 | Sentiment intensity, ranging from [-3, 3] | opinion videos on YouTube |
| | CMU-MOSEI | Test | 4,659 | Sentiment intensity, ranging from [-3, 3] | opinion videos on YouTube |
| | CH-SIMS | Test | 457 | Sentiment intensity, ranging from [-1, 1] | movies, TV series, and shows |
| | CH-SIMS v2 | Test | 1,034 | Sentiment intensity, ranging from [-1, 1] | movies, TV series, and shows |

**Q3:** Impact of Frame Signals on Performance: The results suggest that using face signals alone outperforms frame signals, raising the question of whether frame signals introduce more noise. It would be valuable to explore whether this issue can be mitigated through data cleaning or improved visual signal alignment with other modalities. **A3:** Thanks for your comment. We agree that using frame signals may introduce additional noise, resulting in performance degradation. **Your suggestions for further verification are insightful, and we plan to explore these aspects through additional experiments in future work.**
TypyBench: Evaluating LLM Type Inference for Untyped Python Repositories
Accept (poster)
Summary: The paper introduces TYPYBENCH, a benchmark for evaluating the capability of Large Language Models (LLMs) in type inference for Python repositories without explicit type annotations. It defines two novel metrics: TYPESIM, which captures semantic relationships between predicted and ground truth types using syntax-level features, and TYPECHECK, which assesses type consistency across entire codebases. The benchmark includes 50 well-typed Python repositories from diverse domains. Experimental results show that while LLMs achieve reasonable type similarity, they struggle with complex nested types. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable, the paper is about benchmarks and there are no theoretical claims or proofs. Experimental Designs Or Analyses: I checked how TYPESIM works in detail and appreciate the authors’ efforts to move beyond exact type matching by incorporating semantic similarity. However, while TYPESIM aims to capture the semantic relationships between types, it still primarily relies on syntax-level comparisons and focuses on shared fields and attributes between types. This raises concerns about whether it fully captures deeper semantic equivalences beyond structural similarity. Additionally, the study acknowledges that LLMs are evaluated file by file due to token size limitations. This raises a concern that the capability of LLMs in type inference could be undermined by the limited size of context windows. While I acknowledge that the experimental results are persuasive, it would be great if the authors could discuss using techniques such as program slicing to feed the related context of a type to LLMs. Supplementary Material: Appendix A, B, and C. 
Relation To Broader Scientific Literature: The paper mainly introduces a benchmark for Python type analysis at the repository level, which could guide future research on how LLMs can help with type inference at industrial scale. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: ### Strengths - The type analysis is performed across built-in, user-defined, and generic types. - The benchmark is targeted at type inference on the repository level, across 50 well-typed Python repositories from diverse domains. - The evaluation is comprehensive, including multiple state-of-the-art LLMs. ### Weaknesses - TYPESIM still relies on syntax-level comparisons. - Lack of comparison with baseline static type inference tools other than Mypy. Other Comments Or Suggestions: N/A. Questions For Authors: - LLMs are evaluated file by file. Can the authors discuss the potential limitations of using a single file for type inference and possible future directions in improving LLM's type inference via giving more constructive contexts? - On the lack of comparison with static type inference tools. To my knowledge, there are tools other than Mypy, such as Pyre developed by Meta, that perform Python type analysis. While I am not suggesting additional experiments, could the authors discuss the limitations of relying solely on Mypy as the baseline? How could alternative static analysis tools influence the evaluation results? - TYPESIM only relies on the similarity of syntax features. While TYPESIM effectively relaxes exact type matching, it primarily relies on syntax-level features, such as attribute comparisons. Have you considered incorporating other features related to code semantics? For example, the similarity between the control flow graphs of two functions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them.

> LLMs are evaluated file by file. Can the authors discuss the potential limitations of using a single file for type inference and possible future directions in improving LLM's type inference via giving more constructive contexts?

Thanks for the suggestion. We did a preliminary case study and found that a common cause of type inconsistency comes from missing context (defined in other files or imported from third-party libraries). The patterns include, but are not limited to:

1. Missing base class

```python
class A(B):
    def func(vars):
        super().func(vars)
```

2. Missing function information

```python
def func(vars):
    return another_func(vars)
```

3. Missing decorator information

```python
@some_decorator
def func(vars):
    …
```

The most straightforward way is to provide the whole repository as the context, but this is limited by the LLM's context window and might hurt TypeSim, as shown in Table 4. An alternative way is to copy-paste the necessary context following the `import` statements, but this requires engineering effort. We tried to provide such context manually for the examples we found, and the inconsistency issues were resolved after the context was provided. Another potential method could be iteratively refining the stub files while using all stub files (and the implementation of the current file) as context. This would require a much smaller context window since the stub files are relatively small.

> On the lack of comparison with static type inference tools. To my knowledge, there are tools other than Mypy, such as Pyre developed by Meta, that perform Python type analysis. While I am not suggesting additional experiments, could the authors discuss the limitations of relying solely on Mypy as the baseline? 
> How could alternative static analysis tools influence the evaluation results?

Thanks for bringing up this point. While Mypy is imperfect and could produce false positives or incorrect errors, it is still a popular choice in software engineering best practices for improving code quality. Evaluation results may differ when switching to another static type inference tool like Pyre, but since the majority of static checks should be similar, the change in the evaluation results should be small. The TypeCheck metric could switch to another static type inference tool if a better, well-adopted one emerges.

> TYPESIM only relies on the similarity of syntax features. While TYPESIM effectively relaxes exact type matching, it primarily relies on syntax-level features, such as attribute comparisons. Have you considered incorporating other features related to code semantics? For example, the similarity between the control flow graphs of two functions.

Thanks for your question. If our understanding is correct, you are suggesting not only comparing whether two types have the same functions, but also examining the code semantics, i.e., how similar their implementations of the same function are? We think it is an interesting direction for future work (especially for complex user-defined types), but the current TypeSim already reasonably captures the similarity between types with convenient and robust calculations.

| Original | Predicted | TypeSim |
|---|---|---|
| `list[Any] \| None` | `list[str] \| None` | 0.75 |
| `dict[Any, Any] \| None` | `dict[str, Any] \| None` | 0.875 |
| `dict[str, tuple[Any, ...]] \| None` | `dict[str, tuple[int, ...]] \| None` | 0.9375 |
| `dict[str, list[int]]` | `dict[str, Union[tuple[int, ...], Any]]` | 0.8385 |
| `float \| np.ndarray[Any, Any]` | `np.ndarray[Any, Any]` | 0.5 |
| `pathlib.Path \| None` | `str \| pathlib.Path \| None` | 0.6667 |

Above is a list of type pairs and their TypeSim scores. 
It can be seen that the TypeSim score reflects the compatibility of two types, which roughly matches human intuition about type similarity.
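To make the attribute-based comparison concrete, here is a minimal, self-contained sketch that uses Python's built-in `dir()` as the attribute set. This only captures the spirit of TypeSim; it is not the benchmark's exact formula, which also scores nested generic parameters.

```python
def jaccard(a, b):
    """Jaccard similarity of two sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def attr_sim(t1, t2):
    """Rough type similarity from shared attributes and methods."""
    return jaccard(set(dir(t1)), set(dir(t2)))

# A type is maximally similar to itself, and the sequence types
# list and tuple resemble each other more than either resembles int.
assert attr_sim(list, list) == 1.0
assert attr_sim(list, tuple) > attr_sim(list, int)
```

Because the score depends only on what a type can do, not on its name, the same comparison extends naturally to user-defined classes.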
Summary: In this work, the authors evaluate the ability of LLMs to perform type inference in Python codebases. They introduce two type inference evaluation metrics: (1) TypeSim, which extends prior work focused on exact matching to consider semantic similarity between LLM-inferred vs. human-annotated types, and (2) TypeCheck, which assesses repository-level type consistency (e.g., detection of incompatible return types and invalid argument types). They also introduce a type inference benchmark dataset, TypyBench, which consists of a curated set of 50 Python repos. Their empirical results indicate that LLMs are better at local type inference (as indicated by TypeSim scores) than global consistency (as indicated by the TypeCheck scores). Claims And Evidence: The authors' empirical results largely support their claims and validate the benefits of the metrics they propose. Methods And Evaluation Criteria: The metrics that the authors introduce (e.g., TypeSim and TypeCheck) and the ways in which they operationalize and evaluate them seem appropriate and useful. Theoretical Claims: N/A, the authors’ claims are largely empirical. Experimental Designs Or Analyses: I read their empirical evaluation section, but did not take further steps to verify/replicate the soundness of their results. Supplementary Material: I skimmed the appendix but did not review all parts in detail. Relation To Broader Scientific Literature: This paper is related to prior work on type inference methods, including static/dynamic analysis, ML-based approaches that leverage structured and/or unstructured representations of code, and existing type inference and programming-related benchmark datasets (e.g., for code generation, code completion, debugging, etc.). Essential References Not Discussed: I am not aware of any essential but omitted references. Other Strengths And Weaknesses: Strengths - The paper is well-written and well-motivated. 
- Consideration of the semantic similarity of type annotations meaningfully improves upon evaluation based on exact matching. - The authors conduct empirical analysis on a representative set of LLMs and a well-curated benchmark dataset. Weaknesses - It would be helpful to provide error analysis/insights into how/why LLMs struggle with code consistency. This can help to inform mitigation efforts that do not require passing the whole repo in as context. Other Comments Or Suggestions: - It might be helpful to provide technical definitions for consistency and coherence when these terms are used early in the paper. - If possible, it would be helpful to place algorithm blocks after the text that describes them, rather than before. Questions For Authors: 1. Have you considered alternative methods for assessing type/argument similarity? (eg, embedding-based approaches?) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them.

> Have you considered alternative methods for assessing type/argument similarity? (eg, embedding-based approaches?)

Our attribute-based method (L198) can also be viewed as an embedding-based approach. For example, if we regard each attribute as a column (1 means the type has it and 0 means it does not), then each type gets an embedding representation, and TypeSim is naturally defined as the Jaccard similarity over such a representation. For example:

| Type | `__iter__` | `append` | `__add__` | … |
|---|---|---|---|---|
| int | 0 | 0 | 1 | … |
| list | 1 | 1 | 1 | … |
| Sequence | 1 | 0 | 0 | … |

Such an embedding representation can naturally work for user-defined types, and the similarity is based on the functionality of types rather than the semantic meaning of their names.

> It would be helpful to provide error analysis/insights into how/why LLMs struggle with code consistency. This can help to inform mitigation efforts that do not require passing the whole repo in as context.

Thanks for the suggestion. We did a preliminary case study and found that a common cause of type inconsistency comes from missing context (defined in other files or imported from third-party libraries). The patterns include, but are not limited to:

1. Missing base class

```python
class A(B):
    def func(vars):
        super().func(vars)
```

2. Missing function information

```python
def func(vars):
    return another_func(vars)
```

3. Missing decorator information

```python
@some_decorator
def func(vars):
    …
```

The most straightforward way is to provide the whole repository as the context, but this is limited by the LLM's context window and might hurt TypeSim, as shown in Table 4. An alternative way is to copy-paste the necessary context following the `import` statements, but this requires engineering effort. 
We tried to provide such context manually for the examples we found, and the inconsistency issues were resolved after the context was provided. Another potential method could be iteratively refining the stub files while using all stub files (and the implementation of the current file) as context. This would require a much smaller context window since the stub files are relatively small.

> It might be helpful to provide technical definitions for consistency and coherence when these terms are used early in the paper.

> If possible, it would be helpful to place algorithm blocks after the text that describes them, rather than before.

Thanks for both suggestions. We will improve the manuscript accordingly.
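To make the attribute-based view described at the start of this rebuttal concrete, here is a minimal sketch (not the authors' implementation). The toy attribute table below is the illustrative one from the rebuttal, truncated to three attributes; a real implementation would enumerate each type's full attribute set.

```python
# Toy attribute table from the rebuttal, truncated to three attribute columns.
ATTRS = {
    "int": {"__add__"},
    "list": {"__iter__", "append", "__add__"},
    "Sequence": {"__iter__"},
}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over the 0/1 attribute columns of two types."""
    sa, sb = ATTRS[a], ATTRS[b]
    return len(sa & sb) / len(sa | sb)

print(jaccard("list", "Sequence"))  # 1 shared of 3 total attributes -> 1/3
print(jaccard("int", "list"))       # only __add__ shared -> 1/3
```

With this representation, similarity reflects shared functionality (`list` and `Sequence` both iterable) rather than name overlap.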
Summary: The paper introduces TypyBench, a benchmark aimed at evaluating large language models (LLMs) on their ability to perform type inference for Python code. Recognizing limitations in existing benchmarks and exact-matching metrics, the authors propose two novel evaluation measures: TypeSim, which assesses semantic similarity between predicted and ground truth types, and TypeCheck, which evaluates type consistency across repositories using static type checking. Through experiments on a curated dataset of 50 Python repositories, the study reveals that while state-of-the-art LLMs can achieve strong semantic similarity scores, they struggle with maintaining consistent type annotations at a repository scale, especially for complex or nested types.

## update after rebuttal

I appreciate the authors' responses, which conceptually addressed my concerns, though open-sourcing the whole benchmark at this moment would be more appreciated than the 1-2 examples in the rebuttal for convincing me regarding the benchmark design and quality. I am raising my score to support this paper's acceptance for now, though I am still not fully convinced without seeing the real examples in the benchmark.

Claims And Evidence: Claims in this paper are mostly supported by empirical evidence.

Methods And Evaluation Criteria:

__Need to Justify Why We Need LLMs to Do Type Inference__

While this paper presents a well-structured benchmark and introduces valuable evaluation metrics (TypeSim and TypeCheck) for assessing LLMs' performance on type inference, it lacks a compelling justification for the necessity of using LLMs in this context. Given the existence of mature and widely used symbolic tools such as Mypy, Pyright, and MonkeyType, the paper should more clearly articulate what specific limitations of symbolic approaches LLMs are intended to address.
Without a thorough discussion of the practical advantages of LLMs—such as their ability to generalize to uncommon patterns, handle incomplete code, or infer types in low-resource settings—it remains unclear why LLM-based inference is worth pursuing over traditional methods. To strengthen the contribution, I encourage the authors to (1) provide a clear motivation for using LLMs for type inference, including a comparison of their theoretical or practical advantages over symbolic systems, and (2) include an empirical baseline where symbolic tools are evaluated on the same benchmark. Demonstrating that LLMs can outperform symbolic methods—either in terms of TypeSim, TypeCheck, or overall utility in real-world development scenarios—would substantially reinforce the paper’s significance and help clarify the role LLMs should play in the type inference landscape.

__Usefulness of Applying TypyBench as a Mainstream Evaluation Benchmark__

Another key concern lies in the broader motivation for evaluating LLMs specifically on the task of type inference. As a benchmark, TypyBench would benefit from a clearer articulation of what general model capabilities type inference serves as a proxy for. Unlike tasks such as code generation or execution reasoning—where LLMs are uniquely positioned to outperform symbolic methods and improve downstream applications like self-debugging or automatic code generation—type inference is a narrow and well-defined task that symbolic tools already handle effectively in many practical settings. Without a strong argument for how improving LLMs' performance on type inference translates into broader gains in language model capabilities, it remains unclear why this task warrants dedicated benchmarking and why LLM developers should prioritize it as part of their evaluation pipeline.
To enhance the benchmark’s relevance, the authors should consider framing type inference within a more holistic view of code understanding or software engineering tasks that LLMs uniquely enable. Alternatively, they could provide evidence that performance on type inference correlates with performance on broader tasks like code editing, completion, or static analysis augmentation—thereby justifying the benchmark as a meaningful diagnostic tool. Without this contextualization, the current scope risks appearing too narrow and decoupled from the more impactful capabilities that developers and practitioners typically seek in modern LLMs.

__More Illustration Regarding the New Metrics__

While the proposed TypeSim and TypeCheck metrics are novel and address some limitations of exact match evaluation, their design and practical significance could benefit from further clarification. TypeSim, in particular, introduces a similarity-based measure that may capture more semantic nuance, but it’s not entirely clear how well it aligns with human judgment of correctness or practical utility in real-world development workflows. Similarly, TypeCheck measures consistency via mypy errors, but it would be helpful to better justify why this is a comprehensive proxy for type quality. I encourage the authors to provide more rationale or empirical validation for these metrics—perhaps through user studies, ablation analysis, or correlation with downstream developer effort—to strengthen confidence in their effectiveness and generalizability.

Theoretical Claims: N/A

Experimental Designs Or Analyses: See "Methods And Evaluation Criteria".

Supplementary Material: There is no supplementary material provided.

Relation To Broader Scientific Literature: This is related to program analysis literature in the PL/SE domain.

Essential References Not Discussed: References are well discussed.

Other Strengths And Weaknesses: See "Methods And Evaluation Criteria".
Other Comments Or Suggestions: See "Methods And Evaluation Criteria".

Questions For Authors: See "Methods And Evaluation Criteria".

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them.

### Need to Justify Why We Need LLMs to Do Type Inference

As shown in [previous work](https://github.com/secure-software-engineering/TypeEvalPy) [1], LLMs already showed better type inference results compared with symbolic tools such as Pyright and [Jedi](http://jedi.readthedocs.io/). Moreover, code-completion tools like GitHub Copilot and Cursor are making progressively better type-hint completions in Python. These motivated us to mainly focus on using LLMs for type inference.

> the paper should more clearly articulate what specific limitations of symbolic approaches LLMs are intended to address

Thanks for the suggestion. We use Jedi to illustrate the limitation of symbolic approaches. The original code snippet in the `black` repo is:

```python
@dataclass
class BracketTracker:
    """Keeps track of brackets on a line."""

    depth: int = 0
    bracket_match: dict[tuple[Depth, NodeType], Leaf] = field(default_factory=dict)
    delimiters: dict[LeafID, Priority] = field(default_factory=dict)
    previous: Optional[Leaf] = None
    _for_loop_depths: list[int] = field(default_factory=list)
    _lambda_argument_depths: list[int] = field(default_factory=list)
    invisible: list[Leaf] = field(default_factory=list)
```

The types predicted by *jedi* are:

```python
@dataclass
class BracketTracker:
    depth: int
    bracket_match: dict
    delimiters: dict
    previous: None
    _for_loop_depths: list
    _lambda_argument_depths: list
    invisible: list
```

The TypeSim score on `depth` is 1 and 0.5 for the others; the overall score for this file is 0.6645. The types predicted by claude-3.5-sonnet are:
```python
class BracketTracker:
    depth: int
    bracket_match: Dict[tuple[int, int], Leaf]
    delimiters: Dict[int, int]
    previous: Optional[Leaf]
    _for_loop_depths: List[int]
    _lambda_argument_depths: List[int]
    invisible: List[Leaf]
```

The TypeSim scores for these variables are all 1.0, and the overall score of the file is 0.8906. As shown in the example above, the types predicted by Jedi lost fine-grained information, while Claude recovered most of it with a much higher similarity to the original code.

### Usefulness of Applying TypyBench as a Mainstream Evaluation Benchmark

> what general model capabilities type inference serves as a proxy for

Thanks for the suggestion. TypyBench mainly introduces the following challenges (see Table 1 for concrete numbers):

1. Code understanding with long input context length (repo-level, could be >1M tokens)
2. Long completion length (each function argument and return value needs to be typed)
3. Consistency among many outputs (i.e., the predicted types)

As shown in Figure 4 and Table 2, the relative performance of tested LLMs (claude-3.5, gpt-4o, grok2, and deepseek-v3 were top-tier at that time) generally matches the relative performance on other coding tasks like code completion in [LiveCodeBench](https://livecodebench.github.io/leaderboard.html) (claude-3.5: 32, gpt-4o: 30, gpt-4o-mini: 27.7, date range 8/1/2024 - 2/1/2025), indicating it is a meaningful diagnostic benchmark that pressure-tests the code understanding ability of long-context LLMs.

### More Illustration Regarding the New Metrics

> it’s not entirely clear how well it aligns with human judgment of correctness or practical utility in real-world development workflows.
| Original | Predicted | TypeSim |
|-|-|-|
| `list[Any] \| None` | `list[str] \| None` | 0.75 |
| `dict[Any, Any] \| None` | `dict[str, Any] \| None` | 0.875 |
| `dict[str, tuple[Any, ...]] \| None` | `dict[str, tuple[int, ...]] \| None` | 0.9375 |
| `dict[str, list[int]]` | `dict[str, Union[tuple[int, ...], Any]]` | 0.8385 |
| `float \| np.ndarray[Any, Any]` | `np.ndarray[Any, Any]` | 0.5 |
| `pathlib.Path \| None` | `str \| pathlib.Path \| None` | 0.6667 |

Above is a list of type pairs that are similar but not exactly the same. It can be seen that the TypeSim score reasonably reflects the similarity between the two types.

> it would be helpful to better justify why this is a comprehensive proxy for type quality.

Mypy is a widely adopted best practice in software engineering to improve code quality and detect potential bugs with static checks. For many well-maintained open-source Python repositories, the number of Mypy errors should be almost 0. Therefore, the number of Mypy errors reflects the developer's effort to fix the inconsistencies after using a type inference method.

[1] Shivarpatna Venkatesh, Ashwin Prasad, et al. "TypeEvalPy: A micro-benchmarking framework for python type inference tools." Proceedings of the 2024 IEEE/ACM 46th International Conference on Software Engineering: Companion Proceedings. 2024.
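As a footnote to the TypeCheck discussion above: the metric boils down to counting Mypy errors. A minimal sketch of such a counter is shown below, assuming Mypy's standard `path:line: error: message` report format; the sample report and parsing are illustrative, not the authors' implementation.

```python
def count_mypy_errors(mypy_output: str) -> int:
    """Count error lines in mypy's textual report ('file.py:LINE: error: ...')."""
    return sum(1 for line in mypy_output.splitlines() if ": error:" in line)

# Illustrative mypy-style report (not real output from the benchmark repos).
report = """\
src/a.py:12: error: Incompatible return value type (got "str", expected "int")  [return-value]
src/a.py:30: note: See the mypy documentation for details
src/b.py:7: error: Argument 1 to "func" has incompatible type "list[str]"  [arg-type]
Found 2 errors in 2 files (checked 5 source files)
"""
print(count_mypy_errors(report))  # -> 2 (notes and the summary line are ignored)
```

In practice one would obtain `mypy_output` by running Mypy over the repository with the predicted stub files applied.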
Summary: The paper introduces TYPYBENCH, a benchmark designed to evaluate the type inference capabilities of large language models (LLMs) across entire Python repositories. The benchmark features two novel metrics: TYPESIM, which measures the semantic similarity between predicted and ground truth types, and TYPECHECK, which evaluates type consistency across codebases. The authors evaluate various LLMs on a dataset of 50 high-quality Python repositories and find that while LLMs achieve reasonable TYPESIM scores, they struggle with complex nested types and exhibit significant type consistency errors. The findings suggest that future research should prioritize improving repository-level type consistency over type similarity. TYPYBENCH provides a foundation for this new research direction, offering insights into model performance across different type complexities and usage contexts.

Claims And Evidence: This paper presents clear and compelling evidence. For instance, Section 2.1 cites relevant references to conventional methods, learning-based methods, and LLM-based methods.

Methods And Evaluation Criteria: As far as I know, there are currently no approaches that rely entirely on large language models for type inference, primarily due to concerns around soundness. Given this, why do we need evaluation metrics that are specifically designed for LLMs?

For TypeSim, which aims to evaluate the semantic similarity between predicted and ground-truth types — how frequently does such semantic similarity actually occur at the function level in real-world code? In other words, what proportion of function-level type annotations can be considered semantically interchangeable?

For TypeCheck, which evaluates consistency of types within a codebase — what is the proportion of types in real-world repositories that exhibit this kind of internal consistency?
Without quantitative insights into these questions, it's hard to assess how broadly useful or necessary these two criteria are.

Theoretical Claims: The paper does not provide specific theoretical proofs, as it is not a theoretical paper.

Experimental Designs Or Analyses: The experimental design appears reasonable, as it includes both the Main Evaluation Analysis and the Factors Analysis.

Supplementary Material: I have reviewed the supplementary material, specifically Appendix C.1. The prompts used for single-file type inference and full-repository context type inference appear to be reasonable and well-designed.

Relation To Broader Scientific Literature: This paper proposes two metrics for LLM-based type inference: TypeSim and TypeCheck.

Essential References Not Discussed: The related works are sufficiently discussed in this paper.

Other Strengths And Weaknesses:

Strengths:
+ The dataset curation in this paper is a valuable contribution. Overall, it is necessary and well-justified, especially considering that the widely-used dataset currently available is ManyTypes4Py.
+ Clear structure. The structure of this paper is clear.

Weaknesses:
- The motivation behind this work is worth discussing. As far as I know, there already exist a wide range of metrics for evaluating type inference, such as exact match, match to parameter, and others as mentioned in your paper. So why do we need a dedicated metric specifically for LLMs? This point is not clearly addressed in the introduction. Moreover, you mention in the introduction that types like List and Sequence can be used interchangeably. However, this comes across as a special-case observation rather than a generalizable insight. Have you conducted any statistical analysis to quantify how often such interchangeable usage occurs in real-world codebases? Moreover, can other type pairs like str and bool also be used interchangeably? If so, how frequent are such cases?
In addition, match to parameter seems to perform better compared to the proposed TypeSim, as it can match the outer type structure more effectively. Can I then understand TypeSim as a suboptimal alternative to match to parameter, rather than a fundamentally better metric?

- The methodology behind the proposed criteria seems overly simplistic. For TypeSim, the base type similarity is computed using the Jaccard index, followed by a basic additive scheme to estimate structural similarity. But is this really how large language models perform type inference — through such naive calculations? This method appears to have little to do with the actual mechanisms or capabilities of LLMs, and does not take into account any of the unique characteristics or behaviors of language models. This paper uses the number of mypy errors as a measure of Type Consistency, but this approach is rather naive. mypy itself can produce false positives or incorrect error reports, meaning that the evaluation may not accurately reflect true inconsistencies.

- As a benchmark metric, the current selection of foundation models is insufficient. Notably, the evaluation lacks comparisons with more diverse models such as GPT-4, CodeLlama, StarCoder, CodeT5, and its variants. Including these models would provide a more comprehensive and convincing assessment of the proposed metrics.

- I could not find any links to the open-source code or dataset in this paper. Providing access to the implementation and data is essential for reproducibility and for allowing others to validate or build upon this work.

Other Comments Or Suggestions: None

Questions For Authors:
1. Why do we need a dedicated metric specifically for LLMs?
2. Have you conducted any statistical analysis to quantify how often such interchangeable usage occurs in real-world codebases?
3. Can I then understand TypeSim as a suboptimal alternative to match to parameter, rather than a fundamentally better metric?
4. Is it reasonable to evaluate the type inference capabilities of LLMs using such simple, naive calculations? Are there more suitable evaluation methods that better align with the way LLMs actually reason about code?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and suggestions. We would like to clarify the metrics design and address your remaining concerns with the following responses. We will improve the manuscript accordingly to address them.

> The motivation behind this work is worth discussing. … Why do we need evaluation metrics that are specifically designed for LLMs? [Q1]

First, we want to clarify that our metrics are not only designed for LLMs: all predictions made by any type inference method (including the program-based ones) can be evaluated using these metrics. These metrics are calculated over a set of type predictions.

**For TypeSim**, as stated in the Introduction (L38), exact matching fails to capture important semantic relationships between types. [Q3] *Match up to parametric type* is a special case of our TypeSim metric as formulated in L187:

$S(T, T')=\alpha~s(root, root')+(1-\alpha)S_{list}(args(T),args(T'))$

where in TypeSim $\alpha=0.5$ and $s$ is Jaccard on attributes, and in *match up to parametric type* $\alpha=1$ and $s$ is exact matching (equivalent to taking the floor of the Jaccard index). Therefore, TypeSim is a more general similarity metric than *exact match* and *match up to parametric type*. For example, for a target type `list[str]` and the prediction `list`, exact matching gives a score of 0, which is too harsh, while *match up to parametric type* gives a score of 1, which is too benign. TypeSim gives a score of $\alpha=0.5$, reflecting the similarity between the two types.

[Q2] The main goal of the TypeSim metric is not limited to detecting the types that could be used interchangeably (which are illustrative examples) but also providing a better measurement for the semantic similarity between complex types by considering their functional attributes (Section 4.1.1, L209-219) and the structure (Section 4.1.2) within the types.
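To make the recursive formula concrete, here is a minimal sketch with $\alpha=0.5$. It is not the authors' implementation: the base similarity uses exact name matching as a simple stand-in for the Jaccard-over-attributes $s$, and $S_{list}$ is taken as an average over aligned arguments (one of several possible choices), so the scores only illustrate the shape of the recursion.

```python
from dataclasses import dataclass, field

@dataclass
class Ty:
    """A parametric type: name plus type arguments, e.g. Ty("list", [Ty("str")])."""
    name: str
    args: list["Ty"] = field(default_factory=list)

ALPHA = 0.5

def base_sim(a: Ty, b: Ty) -> float:
    # Stand-in for the paper's Jaccard-over-attributes base similarity s.
    return 1.0 if a.name == b.name else 0.0

def type_sim(a: Ty, b: Ty) -> float:
    """S(T, T') = alpha * s(root, root') + (1 - alpha) * S_list(args, args')."""
    root = base_sim(a, b)
    n = max(len(a.args), len(b.args))
    if n == 0:
        return root  # no parameters on either side: only the root counts
    # Simplified S_list: average over aligned positions, 0 for a missing argument.
    arg_sim = sum(type_sim(x, y) for x, y in zip(a.args, b.args)) / n
    return ALPHA * root + (1 - ALPHA) * arg_sim

print(type_sim(Ty("list", [Ty("str")]), Ty("list", [Ty("str")])))  # -> 1.0
print(type_sim(Ty("list", [Ty("str")]), Ty("list")))               # -> 0.5
```

The second print reproduces the `list[str]` vs. bare `list` case above: the matching root contributes $\alpha=0.5$ and the missing argument contributes nothing.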
As shown in the experiments (Section 6.3.1), TypeSim provides a more nuanced semantic-based evaluation compared with exact matching, especially for more nested types (e.g., depth > 3).

|Original|Predicted|TypeSim|
|-|-|-|
|`list[Any] \| None`|`list[str] \| None`|0.75|
|`dict[Any, Any] \| None`|`dict[str, Any] \| None`|0.875|
|`dict[str, tuple[Any, ...]] \| None`|`dict[str, tuple[int, ...]] \| None`|0.9375|
|`dict[str, list[int]]`|`dict[str, Union[tuple[int, ...], Any]]`|0.8385|
|`float \| np.ndarray[Any, Any]`|`np.ndarray[Any, Any]`|0.5|
|`pathlib.Path \| None`|`str \| pathlib.Path \| None`|0.6667|

Above is a list of similar type pairs. It can be seen that the TypeSim score reasonably reflects the similarity between the two types.

**For TypeCheck**, though Mypy is not a perfect type checker and can produce false positives or incorrect error reports, it is currently a well-adopted best practice in software engineering. Repositories passing the Mypy check (or having fewer Mypy errors) tend to have fewer bugs in practice, so we introduce it as a good starting point for evaluating type inference results. We will discuss this limitation and leave it as future work to find a better checking mechanism. We hope these clarifications resolve your questions. Please let us know if you have further questions.

> The evaluation lacks comparisons with more diverse models such as GPT-4, CodeLlama, StarCoder, CodeT5, and its variants.

We did not evaluate these LLMs since they are already outdated compared with the SOTA LLMs we tested (e.g., GPT-4o, Claude-3.5-sonnet). Moreover, SOTA general-purpose models (e.g., GPT-4o) have already surpassed code-focused models in coding tasks.
Nonetheless, we tested CodeLlama on the test sets as suggested, which gives similar results to Llama-3-8B:

|Model|TypeCheck|TypeSim|TypeSim wo missing|Missing rate|
|-|-|-|-|-|
|Llama-3-8b|44.0|0.396|0.747|0.470|
|CodeLlama|25.0*|0.3558|0.735|0.516|

*: only counted on private-gpt

**Other questions:**

> [Q4] Is it reasonable to evaluate the type inference capabilities of LLMs using such simple, naive calculations? Are there more suitable evaluation methods that better align with the way LLMs actually reason about code?

We’d like to clarify that our proposed metrics — TypeSim and TypeCheck — are not designed to mimic or replicate how LLMs internally reason about code or types. Instead, they are metrics measuring the semantic quality and repository-wide consistency of predicted types (regardless of how those predictions are produced, by LLMs or symbolic tools). TypeSim provides an interpretable view of type prediction quality, especially when predictions are partially correct. Its formulation using Jaccard similarity and structural decomposition is a pragmatic and interpretable approach that aligns with how types are composed and used in practice.

> I could not find any links to the open-source code or dataset in this paper.

We will open-source the code and dataset when published.
Strategic Planning: A Top-Down Approach to Option Generation
Accept (poster)
Summary: This paper proposes a top-down learning method for Reinforcement Learning (RL). The method leverages a Large Language Model (LLM) to decompose a complex task from high-level goals into fine-grained plans, considering specificity, value, and feasibility. Following decomposition, these plans are transformed into reward signals through reward shaping for RL training. The paper demonstrates the effectiveness of using rewards derived from LLM decomposition. In experiments, the proposed method is evaluated on the Crafter environment, and the results show that it outperforms conventional RL methods.

Claims And Evidence: This paper claims that bottom-up approaches learn low-level actions and improve through extensive trial and error, and argues that LLM-augmented RL methods (e.g., Hierarchical RL, LLM-as-policy, reward shaping) also inherit the drawbacks associated with the bottom-up paradigm. However, these methods can also accelerate the convergence of RL agents to optimal solutions by leveraging language priors, which reduce the need for large-scale trial and error. Consequently, the primary advantage of top-down approaches remains unclear, and the paper lacks experimental comparisons between its proposed method and other LLM-augmented RL techniques.

Methods And Evaluation Criteria: The evaluation benchmark is limited, consisting of only two tasks, "Hunt" and "Breed," which are not sufficiently representative.

Theoretical Claims: I have checked Theorem 2.1.

Experimental Designs Or Analyses: The baselines are lacking. In Sections 1-3, this paper discusses the advantages of top-down approaches compared to bottom-up ones that are also augmented by LLMs (e.g., LLM as policy, hierarchical RL). However, in the experiments, the authors only compared their method with traditional RL approaches, such as PPO, and did not conduct comparisons with bottom-up approaches that also use LLMs.
This omission makes it difficult to demonstrate the superiority of the top-down approach in scenarios where LLMs are introduced in both paradigms.

Supplementary Material: There is no supplementary material.

Relation To Broader Scientific Literature: This paper enhances RL learning using an LLM following a top-down paradigm. While previous research has also explored LLM-augmented RL, these studies typically adopt a bottom-up approach, which often requires extensive trial and error.

Essential References Not Discussed: None.

Other Strengths And Weaknesses:

**Strengths**
1. This paper introduces a top-down paradigm for RL with LLMs, offering a novel perspective on task decomposition and RL learning.
2. The paper proposes a flexible tree structure for task decomposition, featuring two node types – approach nodes and plan nodes – and three selection metrics.

Other Comments Or Suggestions: None.

Questions For Authors:
1. In Section 4.3, how is the entropy H calculated using the LLM?
2. In Section 4.3, how is the value v calculated? Is J(·) obtained through rollout or from the environment’s reward process?
3. In Section 4.3, what does the probability in feasibility refer to?
4. In lines 152-153, it is stated that the entropy H needs to be lower than $H_{max}$ to satisfy being sufficiently specific. In Section 4.3, how does the calculation of specificity ensure this requirement is met?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful review and for recognizing the novel perspective. We appreciate the reviewer's concern about lacking comparisons between our top-down approach and bottom-up approaches that also leverage LLMs. This is a valid point, and we have now expanded our evaluation to include **two additional strong RL baselines (DreamerV3 [1] and EDE [2])** as well as **two LLM-based bottom-up approaches (LLM-as-policy and Eureka [3])** to provide a more comprehensive assessment of our method's effectiveness.

## Extension to other RL baselines

We've added **DreamerV3** (50M parameters) [1] and **Exploration via Distributional Ensemble (EDE)** [2], both highly ranked on the original Crafter leaderboard. **DreamerV3** achieves slightly better performance than PPO (0.15 ± 1.39, N=4) but remains below our Strategist + PPO framework. [Preliminary results](https://imgur.com/a/NuBI80H) indicate that **EDE** reaches high reward (~6 avg after 2M training steps) by initially discovering a hunting strategy and subsequently learning survival skills to lengthen episodes.

Critically, our Strategist framework can be combined with EDE, enabling it to be steered toward distinct strategies such as hunting and breeding. While the hunting strategy performs comparably to early EDE stages, the breeding strategy—which was overlooked by vanilla EDE—enables a **2× reward improvement over EDE's final performance**. This demonstrates that our approach is broadly applicable across different RL algorithms and can enhance the exploration capabilities of existing methods.

## New LLM-based baseline experiments

We implemented an **LLM-as-policy** approach (shown to be effective in prior works such as Voyager and Reflexion), using GPT-4o-mini for direct action selection at each timestep.
Notably, this is the same model used to annotate states in the Strategist’s reward shaping procedure to ensure a consistent LLM backbone across both baselines and our main method. This approach achieved 0.43 ± 1.01 reward (N=15), substantially underperforming our Strategist framework combined with PPO or EDE agents while being computationally expensive due to per-step inference requirements.

Additionally, we evaluated **Eureka** [3] with GPT-4o, a state-of-the-art bottom-up LLM-based iterative reward discovery method. In our preliminary experiments with an evolutionary search (population=3, depth=3), Eureka's designed rewards didn't meaningfully outperform the environment reward, yielding results equivalent to baseline PPO while requiring a higher exploration budget. These comparisons directly demonstrate the advantages of our top-down strategic approach over bottom-up LLM-augmented alternatives in terms of both effectiveness and computational efficiency.

## Distinction between bottom-up and top-down methods

We appreciate the clarification requested regarding the distinction between top-down and bottom-up methods. Indeed, we agree that bottom-up approaches augmented with LLMs can significantly improve sample complexity, and we emphasize that our top-down method is not inherently mutually exclusive but rather orthogonal to these bottom-up methods. Consequently, direct empirical comparisons were initially omitted as our goal was not to position these approaches in competition, but to present a complementary framework. For instance, methods using an LLM as a policy or hierarchical RL can be readily integrated alongside our Strategist framework to potentially further improve sample efficiency, as discussed in our future work section.
## Additional comments and revisions

We acknowledge the importance of the detailed questions raised about Section 4.3, and will address these clearly in a newly added appendix section, briefly summarized here due to character constraints:

- Regarding feasibility, this probability represents how much of the probability mass of the current strategy aligns with policies achievable by the agent’s learning hypothesis class.
- Initially, attributes like specificity, feasibility, and value are estimated by the LLM based on its prior knowledge. As demonstrated in Section 5.3, these estimates can be dynamically updated with actual environment interactions, opening possibilities for future multi-armed bandit-inspired strategy adjustments during learning, which we encourage in our future work discussion.

Thank you again for your valuable insights and constructive suggestions.

## References

[1] Hafner, Danijar, et al. "Mastering diverse domains through world models."

[2] Jiang, Yiding, J. Zico Kolter, and Roberta Raileanu. "Uncertainty-driven exploration for generalization in reinforcement learning."

[3] Ma, Yecheng Jason, et al. "Eureka: Human-Level Reward Design via Coding Large Language Models."
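The reward-shaping step referenced throughout this discussion can be illustrated with standard potential-based shaping (Ng et al., 1999), which leaves the optimal policy unchanged. In the sketch below, the potential function is a hypothetical plan-progress signal invented for illustration, not the paper's actual Strategist reward.

```python
# Minimal sketch of potential-based reward shaping:
#   r'(s, a, s') = r(s, a, s') + gamma * phi(s') - phi(s)
# phi is a hypothetical potential: the fraction of sub-goals from an
# LLM-generated plan (here, a "collect meat" plan) satisfied in a state.

GAMMA = 0.99

def phi(state: dict) -> float:
    subgoals = ["has_weapon", "found_cow", "has_meat"]  # illustrative plan steps
    return sum(state.get(g, False) for g in subgoals) / len(subgoals)

def shaped_reward(env_reward: float, state: dict, next_state: dict) -> float:
    return env_reward + GAMMA * phi(next_state) - phi(state)

s = {"has_weapon": True, "found_cow": False, "has_meat": False}
s_next = {"has_weapon": True, "found_cow": True, "has_meat": False}
print(shaped_reward(0.0, s, s_next))  # positive bonus for progressing on the plan
```

Because the bonus telescopes along any trajectory, this form of shaping provides dense feedback from a natural-language plan without changing which policy is optimal.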
Summary: The paper proposes Strategic Planning, a top-down approach for decomposing a complicated reinforcement learning (RL) task into natural-language-described sub-tasks. The paper also designs a reward shaping methodology that translates these strategies expressed in natural language into quantitative feedback for RL methods. Detailed implementation is introduced with theoretical justifications. The overall pipeline has been tested on the Crafter environment to show its effectiveness.

Claims And Evidence: See below.

Methods And Evaluation Criteria: Yes, it makes sense.

Theoretical Claims: No problem with me.

Experimental Designs Or Analyses: Yes, I have checked. See below.

Supplementary Material: I roughly checked the appendix.

Relation To Broader Scientific Literature: The current paper discussed using LLMs and human prompts to build a more effective planning hierarchy and design auxiliary reward functions to help solve RL problems. The high-level idea of using LLMs to make plans and design options is not new (e.g., SayCan), and using LLMs to design reward functions is not new (e.g., Eureka). However, the current paper is novel in terms of its detailed implementation and methodology.

Essential References Not Discussed: Many classic hierarchical RL studies should be discussed. E.g., Feudal RL actually shares a similar high-level idea of top-down hierarchy.

- Dayan P, Hinton G E. Feudal reinforcement learning[J]. Advances in Neural Information Processing Systems, 1992, 5.
- Dietterich T G. Hierarchical reinforcement learning with the MAXQ value function decomposition[J]. Journal of Artificial Intelligence Research, 2000, 13: 227-303.
- Vezhnevets A S, Osindero S, Schaul T, et al. Feudal networks for hierarchical reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2017: 3540-3549.

Other Strengths And Weaknesses:

### [Strengths]
- Ideas are theoretically justified.
- Figures are nicely made.
- The writing is easy to follow.
### [Weaknesses]

- My largest concern is to what degree the proposed method can be effective in general. If I understood correctly, Strategic Planning requires humans to input a textual strategy, e.g., "collect as much meat as possible". I am not sure how well this generalizes to other tasks. How do you know a strategy is good in general?
- The comprehensiveness of the empirical results could be improved. More types of tasks and environments should be evaluated, and stronger baselines (not only vanilla PPO, but also methods that use LLMs, i.e., SayCan and Eureka) should be compared against, to meet the quality bar of ICML.
- Using Minecraft meat collection as an illustrative example may not be a proper choice, since not everyone is familiar with it.
- The introduction is broken into several parts with respective subtitles. This may disrupt the reading flow and make the text feel discontinuous.
- Many classic works on hierarchical RL are ignored; see "Essential References Not Discussed".

Other Comments Or Suggestions:

- Figure 4 contains an 80% confidence interval, which is not a common choice (95% or 68% is more common); is there any reason for this?
- The Crafter environment was modified by the authors in this study? Could the authors explain why? Can Strategic Planning also work in the non-modified version?

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
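As background for the reviewer's point about interval choice: an 80% and a 68% central percentile interval differ only in which percentiles of the run distribution are reported. A minimal sketch, using synthetic reward samples (the numbers and the helper `percentile_interval` are illustrative assumptions, not the paper's data or code):

```python
import numpy as np

def percentile_interval(samples, coverage=0.80):
    """Central percentile interval covering `coverage` of the samples."""
    lo = 100 * (1 - coverage) / 2
    hi = 100 - lo
    return np.percentile(samples, [lo, hi])

rng = np.random.default_rng(1)
rewards = rng.normal(loc=5.0, scale=1.0, size=10_000)  # synthetic per-run rewards

ci80 = percentile_interval(rewards, 0.80)  # roughly 5 ± 1.28 for unit-variance Gaussian data
ci68 = percentile_interval(rewards, 0.68)  # roughly 5 ± 0.99, close to the ±1 std convention
```

The 68% interval roughly coincides with ±1 standard deviation for Gaussian data, which is why it is the more conventional choice.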
Rebuttal 1:

Rebuttal: We sincerely thank you for your detailed and insightful feedback.

## Additional RL baselines

Following your suggestion, we have added **two competitive RL baselines**: DreamerV3 [1] and Exploration via Distributional Ensemble (EDE) [2], both high-ranking on the original Crafter leaderboard. DreamerV3 achieves slightly better performance than PPO (0.15 ± 1.39 (std, N=4)) but remains below our Strategist + PPO framework. [Preliminary results](https://imgur.com/a/NuBI80H) indicate that EDE reaches high reward on Crafter-Medium (~6 avg) by initially discovering a hunting strategy and subsequently learning survival skills to lengthen episodes further. However, our Strategist approach is orthogonal to the underlying RL agent: combined with EDE, the Breeding strategy—which vanilla EDE does not exploit—results in a **2× increase in rewards compared to EDE's final performance**. This highlights the value of our top-down approach in uncovering effective, alternative strategies that traditional bottom-up methods may not discover.

## LLM baselines

We agree it is important to compare our method to other LLM-based baselines. We have added **two additional LLM baselines**: LLM-as-Policy and Eureka [3]. First, we implemented an **LLM-as-Policy** baseline with GPT-4o-mini, which directly selects actions at each step. This baseline achieved only 0.43 ± 1.01 reward (N=15), significantly underperforming our Strategist framework. Furthermore, its computational cost is high due to requiring inference at each timestep. Second, we compared our method to **Eureka**, an LLM-based approach designed for iterative reward discovery using GPT-4o. Eureka's primary objective is finding high-quality reward functions, without aiming for sample efficiency.
In our experiments, Eureka's evolutionary search (population size 3, depth 3) did not yield rewards meaningfully superior to the default environment reward, performing equivalently to baseline PPO despite utilizing greater exploration resources. For completeness, we will provide results with a deeper Eureka run in the revised manuscript.

SayCan is designed for robotic tasks using predefined action primitives and a pretrained value function for feasibility evaluation. This setup does not transfer well to environments where the space of meaningful behaviors is broader and the choice of primitives is not natural; moreover, the existence of a pre-trained low-level actor is a strong requirement that is rarely satisfied. Instead, our method leverages LLMs for high-level strategy generation and uses language-conditioned RL to ground those strategies through interaction. In practice, approaches like SayCan and action-proposing LLMs are actually orthogonal to ours—they focus on low-level decision-making given a set of primitives, which could be combined with our high-level strategic planning and exploration.

## Additional comments

- **Generalizability**: We clarify that one of the Strategist's strengths is precisely its flexibility. Our method is environment-agnostic provided the LLM is able to generate meaningful strategies from domain descriptions, which is reasonable for a broad range of tasks. Unlike other methods, the Strategist does not require pre-training, expert demonstrations, or predefined sub-goals.
- **Hierarchical RL**: Thank you for pointing out key prior work. We expanded Appendix D.3 to better situate our method within the hierarchical RL literature. In addition to citing Feudal RL (Dayan & Hinton, 1992), we now include discussions of MAXQ (Dietterich, 2000) and Feudal Networks (Vezhnevets et al., 2017), highlighting both differences and potential complementarities with our framework.
- **Confidence intervals in Fig.
4**: We acknowledge the nonstandard use of the 80% interval. We now include 68% intervals in the appendix, noting that this change does not affect the qualitative conclusions.
- **Crafter environment modifications**: We modified Crafter to better expose strategic trade-offs (e.g., hunting vs. breeding), making the evaluation of strategy discovery more explicit. Our method remains applicable to the original environment, but we found it less effective for highlighting the core contributions given its achievement-based reward.
- **Minecraft meat-collection example**: To address accessibility, we added a real-world motivating example (hospital management) in the appendix, illustrating how our framework supports discovering diverse high-level strategies in complex decision-making settings.

Thank you once again for your constructive critique and valuable recommendations.

## References

[1] Hafner, Danijar, et al. "Mastering Diverse Domains Through World Models."

[2] Jiang, Yiding, J. Zico Kolter, and Roberta Raileanu. "Uncertainty-Driven Exploration for Generalization in Reinforcement Learning."

[3] Ma, Yecheng Jason, et al. "Eureka: Human-Level Reward Design via Coding Large Language Models."

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' response and the comparisons with more alternative methods. While part of my concern has been mitigated, a major concern remains about how the proposed method works in diverse tasks. I updated my score to 2 accordingly.

---

Reply to Comment 1.1.1:

Comment: Thank you for your engagement and for acknowledging our additional experiments and comparisons. We appreciate the opportunity to further clarify your concerns. Below, we address them in two parts: **(1) why our current experimental setting already demonstrates general applicability**, and **(2) we present new results on the original Crafter environment that further confirm the broader potential of our approach**.
---

# Why the Current Experimental Setting Demonstrates Broader Potential

## 1. General Conditions for Applicability

Our method requires only two general conditions:

- A capable LLM that can propose meaningful domain-relevant strategies.
- An ability to interpret environment states into signals aligned with these strategies.

We explicitly discuss in the manuscript that if an LLM fails to generate relevant strategies or to interpret observations adequately, our method naturally becomes ineffective. However, modern LLMs already exhibit strong reasoning capabilities and broad domain coverage, implying our framework is readily applicable to a diverse set of real-world tasks.

## 2. Insights Provided by Our Modified Crafter Tasks

We deliberately designed our modified Crafter tasks to test multiple strategic dimensions, including short-horizon (hunting) and long-horizon (breeding) objectives, across two difficulty levels (Easy and Medium). This experimental design explicitly evaluates our Strategist's capacity to guide agents toward complex, delayed-reward strategies that even state-of-the-art methods (e.g., DreamerV3, EDE) fail to uncover. Specifically, our experiments demonstrate:

1. **Superior Performance:** Strategist-guided agents consistently outperform the baselines.
2. **Strategy-Driven Exploration:** The Strategist directs the agent toward approaches that would otherwise remain unvisited. For instance, combining our method with EDE unlocked breeding behaviors EDE alone never found.
3. **Consistency Check:** We verify that the LLM's preliminary feasibility/value estimates correlate with final RL outcomes.

Together, points 1–3 provide confidence that the improvements are neither spurious nor narrowly overfitted. Thus, our setup already indicates that the Strategist can (i) learn complex behaviors from strategies, (ii) significantly reduce reliance on lucky exploration, and (iii) integrate smoothly with existing RL algorithms.
---

# Additional Experiments in the Original Crafter Environment

To further confirm the generality of our approach, we evaluated Strategist+EDE on the (official) Crafter environment, which includes built-in achievement-based rewards. The Strategist proposed four distinct approaches to collecting achievements—Fight, Craft, Resources, and Needs—each prioritizing a subset of achievements (see the center top panel in the results, linked below). We then compared each to vanilla EDE. These preliminary results are promising and clearly illustrate our method's broad applicability. Full results are available [here](https://imgur.com/a/fTj7F2G).

1. **Overall Achievement Gains (Fig. 1):** Strategies focused on Fighting, Crafting, and Resources outperform baseline EDE by approximately +1.58, +1.48, and +1.35 achievements per episode, respectively (baseline average: 7.48).
   - The Needs strategy performs similarly to baseline, since survival-related achievements are easily reached through exploration.
2. **Strategy-Specific Milestones (Fig. 2):** Each strategy uniquely excels in tasks aligned with its intended focus:
   - **Fight** significantly accelerates achievements related to weapon crafting (stone swords) and combat tasks (e.g., defeating skeletons).
   - **Craft** builds stone swords and pickaxes, and together with **Resources**, they are the only ones to succeed in advanced resource collection tasks (e.g., gathering iron, see Fig. 3), all of which go undiscovered by baseline EDE.
   - Figure 2 illustrates these gains clearly, demonstrating rapid achievement of specialized goals that baseline methods rarely or never discover.
3. **Distinct Strategy Footprints (Fig. 3):** The heatmap reveals distinct and interpretable patterns of achievement completion that uniquely characterize each strategy, clearly emphasizing the specific tasks each was designed to target.
Anecdotally, when provided with this table with column names blanked out, an o1 model can accurately infer which achievement completion profiles correspond to each strategy, highlighting the clarity and effectiveness of policy steering.

Collectively, these results demonstrate that our Strategist framework generalizes effectively beyond the initial domain modifications and across multiple tasks. By validating our method on an additional, widely used benchmark with more open-ended strategies, we believe this directly addresses your concern regarding general applicability.

---

Thank you again for your feedback and valuable suggestions.
Summary: The paper considers a top-down approach to hierarchical planning/option generation. The proposed Strategist Agent builds a tree structure that specifies alternative or sequential plans (approach and plan nodes), which are broken down further if necessary. The tree structure is generated by a sufficiently "strong" LLM with prompts describing the design of the Strategist Agent and a sufficiently rich description of the domain. The LLM attaches a reward function and a feasibility evaluation to the nodes as well. The proposed algorithm is illustrated on the Crafter environment, with an empirical comparison showing favorable performance compared to PPO. The paper also includes a generic result that links the specificity and value of a policy to the sample complexity.

Claims And Evidence: The paper claims empirical superiority over PPO on the task considered, which is not surprising for a task with sparse rewards and long scenarios. The approach seems interesting enough, but the evaluation is limited.

Methods And Evaluation Criteria: The evaluation method is reasonable, but the set of baselines is very limited. Even in the absence of a larger pool of implemented/tested baselines (there should have been some suited to dealing with sparse rewards and long episodes), the test environment could have been chosen such that the Crafter scoreboard would have served as a measure for comparison.

Theoretical Claims: I parsed the proofs, but the theoretical result has a minor role in the paper anyway.

Experimental Designs Or Analyses: I have looked closely at the empirical design, including the prompt details.

Supplementary Material: I parsed the details of the Strategist Agent, the experiments, and the proofs.

Relation To Broader Scientific Literature: It is an interesting approach, very much relevant for training hierarchical agents using LLMs.
Essential References Not Discussed: The most relevant work is sufficiently discussed (especially in the supplementary material).

Other Strengths And Weaknesses: The prompts and the generated plans are quite illustrative.

Other Comments Or Suggestions: The authors argue that SayCan is more limited because its options are specified bottom-up, while here they are completely top-down. While this is true, I would add that the two operate in very different environments: the environment discussed here is essentially a strategy game, while SayCan deals with robots. It is much easier to evaluate feasibility and shape reward functions in the former; LLMs would probably have more difficulty defining options in the latter. The prompts detailed in this paper are also quite rich in domain knowledge that helps define the rewards.

Questions For Authors: Would it be possible to compare the Strategist Agent with the leaderboard for Reacher?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
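The tree structure described in this review — approach nodes holding alternative sub-strategies, plan nodes holding sequential steps, each annotated by the LLM with a reward function and a feasibility estimate — can be sketched as a simple data structure. All names and field choices here (`StrategyNode`, the example strategies) are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class StrategyNode:
    """One node in a Strategist-style tree (hypothetical sketch):
    'approach' nodes list alternative sub-strategies, 'plan' nodes
    list sequential steps; the LLM annotates each node."""
    description: str  # natural-language strategy text
    kind: str         # "approach" (alternatives) or "plan" (sequence)
    feasibility: float  # LLM-estimated probability the strategy is learnable
    value: float        # LLM-estimated value if executed well
    # Shaped reward derived from the strategy text; default gives no shaping.
    reward_fn: Callable[[dict], float] = lambda obs: 0.0
    children: List["StrategyNode"] = field(default_factory=list)

# Toy example mirroring the Crafter meat-collection task discussed above.
root = StrategyNode("collect as much meat as possible", "approach", 1.0, 1.0)
root.children.append(StrategyNode("hunt cows", "plan", 0.9, 0.6,
                                  reward_fn=lambda obs: float(obs.get("meat", 0))))
root.children.append(StrategyNode("breed cows, then harvest", "plan", 0.5, 0.9))
```

A downstream RL agent would then be conditioned on one leaf's description and trained against its `reward_fn`, with the LLM's feasibility/value annotations guiding which branch to explore first.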
Rebuttal 1:

Rebuttal: We appreciate the reviewer's feedback regarding our evaluation methodology. To address these concerns, we have substantially expanded our experimental comparisons to include **two additional RL baselines specifically designed for sparse reward settings and long episodes (DreamerV3 [1], EDE [2]), as well as two LLM-based approaches (LLM-as-Policy, Eureka [3]).**

## Model-based and exploration-focused RL baselines

We now include **DreamerV3** (50M parameters) [1], a leading model-based RL approach, which slightly outperforms PPO but still falls short of our Strategist+PPO combination. Additionally, we evaluate **EDE** (Exploration via Distributional Ensemble) [2], another high-performing method on the original Crafter leaderboard. [Preliminary results](https://imgur.com/a/NuBI80H) indicate that it achieves strong performance on Crafter-Medium (~6 avg reward after 2M training steps) by following a hunting strategy while learning to maximise survival to extend episodes.

Notably, our Strategist framework is complementary to the underlying RL algorithm. When paired with EDE, it introduces a breeding strategy that vanilla EDE fails to uncover. This results in a **2× improvement in reward**, highlighting how high-level strategic guidance enhances and expands the exploration capabilities of existing methods. This also demonstrates that our method is broadly applicable across different RL approaches and can draw from the strengths of each method depending on the environment of interest.

## Comparison with LLM-based approaches

We have expanded our evaluation to include two prominent LLM-based baselines to clarify our method's advantages. First, we implemented an **LLM-as-Policy** baseline using GPT-4o-mini, which directly selects actions at each step. This baseline achieved only 0.43 ± 1.01 reward (N=15), significantly underperforming our Strategist framework.
Furthermore, its computational cost is high due to requiring inference at each timestep, which underscores its impracticality for complex environments. Second, we compared our method to **Eureka** [3], an LLM-based approach designed for iterative reward discovery using GPT-4o. Eureka's primary objective is finding high-quality reward functions, which often comes at the expense of sample efficiency. In our experiments, Eureka's evolutionary search (population size 3, depth 3) did not yield rewards meaningfully superior to the default environment reward, performing equivalently to baseline PPO despite utilizing greater exploration resources. For completeness, we will also provide results with a deeper Eureka run in the revised manuscript.

These results clearly illustrate that our Strategist framework—translating high-level LLM-generated strategies into actionable rewards—is significantly more effective and sample-efficient than both direct LLM action-selection and standard iterative reward optimization methods, especially in complex, strategic domains.

## Modified Crafter environment

Regarding the modification of the Crafter environment, our intention was to explicitly showcase the benefits of strategic exploration facilitated by our approach. By adjusting the environment to feature clear alternative strategies, such as hunting versus breeding, we aimed to rigorously test the Strategist Agent's ability to identify and exploit distinct high-level strategies effectively.

## Comparison with SayCan

We thank the reviewer for highlighting important differences between our method and SayCan. Indeed, SayCan addresses bottom-up planning in robotic environments, which typically admit well-defined primitive actions. In such contexts, feasibility assessment through pretrained value functions is straightforward but requires costly pre-training of low-level actors, limiting sample efficiency.
Conversely, our top-down Strategist Agent is tailored for complex strategy-driven environments that inherently have a wide range of possible high-level actions. Unlike SayCan, our method does not rely on a fixed set of predefined primitives, making it suitable for broader strategic exploration. Our approach fundamentally capitalizes on the domain-agnostic strategic planning capabilities of LLMs combined with language-conditioned RL, enabling iterative refinement and robust exploration. Thus, our framework is uniquely positioned to effectively address settings where bottom-up methods like SayCan would be less feasible or impractical.

Thank you again for your valuable feedback and suggestions.

## References

[1] Hafner, Danijar, et al. "Mastering Diverse Domains Through World Models."

[2] Jiang, Yiding, J. Zico Kolter, and Roberta Raileanu. "Uncertainty-Driven Exploration for Generalization in Reinforcement Learning."

[3] Ma, Yecheng Jason, et al. "Eureka: Human-Level Reward Design via Coding Large Language Models."

---

Rebuttal Comment 1.1:

Comment: The additional experiments do indeed improve the paper.

---

Reply to Comment 1.1.1:

Comment: Thank you for acknowledging that our additional experiments with DreamerV3, EDE, and LLM-based baselines have improved the paper. We are glad these comparisons addressed many of your prior concerns. To further confirm the generality of our approach, we also conducted **additional experiments** evaluating **Strategist+EDE** on the **original** (official) Crafter environment, which includes built-in achievement-based rewards. The Strategist proposed four distinct approaches to collecting achievements—Fight, Craft, Resources, and Needs (see the center top panel in the results, linked below)—each prioritizing a subset of achievements. We then compared each to vanilla EDE. Full results are available [**here**](https://imgur.com/a/fTj7F2G).
We briefly summarize these compelling new results below, further highlighting the flexibility and general applicability of our Strategist framework:

### Additional Experiments in the Original Crafter Environment

1. **Overall Achievement Gains (Fig. 1):** Strategies focused on Fighting, Crafting, and Resources outperform baseline EDE by approximately +1.58, +1.48, and +1.35 achievements per episode, respectively (baseline average: 7.48).
   - The Needs strategy performs similarly to baseline, since survival-related achievements are easily reached through exploration.
2. **Strategy-Specific Milestones (Fig. 2):** Each strategy uniquely excels in tasks aligned with its intended focus:
   - **Fight** significantly accelerates achievements related to weapon crafting (stone swords) and combat tasks (e.g., defeating skeletons).
   - **Craft** builds stone swords and pickaxes, and together with **Resources**, they are the only ones to succeed in advanced resource collection tasks (e.g., gathering iron, see Fig. 3), all of which go undiscovered by baseline EDE.
   - Figure 2 illustrates these gains clearly, demonstrating rapid achievement of specialized goals that baseline methods rarely or never discover.
3. **Distinct Strategy Footprints (Fig. 3):** The heatmap reveals distinct and interpretable patterns of achievement completion that uniquely characterize each strategy, clearly emphasizing the specific tasks each was designed to target.

Anecdotally, when provided with this table with column names blanked out, an o1 model can accurately infer which achievement completion profiles correspond to each strategy, highlighting the clarity and effectiveness of policy steering.

Collectively, these additional results unequivocally demonstrate that our Strategist framework generalizes effectively, delivering substantial improvements in exploration efficiency and task-specific performance across diverse scenarios.
We hope these new results further strengthen the evidence for the broad applicability and effectiveness of our approach. Please let us know if there is anything else you would like us to clarify further or any remaining issues you wish us to address. We sincerely welcome your feedback to ensure the final version meets the highest possible standards. Thanks again for your constructive input and for helping us strengthen the paper.
Summary: This paper defines a new hierarchical RL framework by introducing a Strategy Problem: finding distributions over policies that balance specificity and value. This involves using an LLM to generate sub-goals, and a reward shaping method to translate these sub-goals into quantitative feedback for RL. The proposed method aims to define a new type of model-based RL, which uses an LLM for sub-goal definition.

## update after rebuttal

I appreciate the authors' responses. The authors have addressed my concerns, and I have no objections to accepting this paper. I have increased my rating by a point.

Claims And Evidence:

"We formalize strategy as a distribution over policies that balances specificity (pruning the search space) and value (encompassing optimal plans)." Supported.

"We instantiate these ideas in the Strategist agent, which uses LLM-based tree search to encode domain knowledge into actionable top-down strategies without prespecifying their components. Crucially, we introduce a reward shaping methodology that translates strategies into quantitative feedback for RL." Supported.

"We empirically validate that this procedure enables effective exploration, leading to faster convergence than a classical PPO baseline." Supported.

Methods And Evaluation Criteria: A standard MDP formulation is defined. This is followed by a formal definition of specificity (a small set of policies) and value (there should exist in this small set a near-optimal policy). Then, the Strategy Problem is defined as finding a small set of policies that includes the near-optimal policy. The authors describe a Strategist Framework that generates and explores potential strategies expressed in natural language. The framework is evaluated on a modified version of the Crafter environment, where the task is collecting meat at different levels of difficulty. (more details coming here...)

Theoretical Claims: I have read the paper, but have not carefully checked the proofs in the Appendix.
Experimental Designs Or Analyses: The authors describe their approach as human-like; however, they neither discuss the human behavioural literature nor ground their framework in human data or human experiments. This claim should be removed. The actual goal of this paper comes across as producing a model-based RL method that leverages LLMs. I would still recommend accepting the paper on that basis.

Supplementary Material: I did not see a supplement, but there is a long appendix. The authors clearly did a lot of work. I did not review the entire appendix.

Relation To Broader Scientific Literature: The references related to RL models with LLM planning are well discussed; however, the authors claim without evidence that their model is human-like. There are many models of human planning, including models of human planning in problems that require simplification, structural inference, or reductive cognitive map representations; these would need to be discussed if the authors wanted to justify the human-likeness claim.

Essential References Not Discussed: If the authors would like to ground their method in human planning, they would have to discuss that literature, but I suspect that is not the goal. One suggestion of a related paper: https://arxiv.org/pdf/2310.11614

Other Strengths And Weaknesses: Strengths: A well-written paper, with lots of material in the Appendix. Has proofs and theory. I like the approach to formalizing what it means to search over a small number of sensible policies. The main weakness is presentation: the authors oversell their model as a New Paradigm in RL, which it is not. The sub-goal discovery method in this paper is new, and the authors can claim that they improve on existing approaches, but model-based RL is a very well established area. The authors also claim human-likeness without justification.

Other Comments Or Suggestions: PPO is not defined in the abstract. Line 171: PAC not defined.

Questions For Authors: .

Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank you for recognizing the strengths of our paper, including the clarity of writing and insightfulness of the formalism.

## Clarifying the "human-inspired" framing

We acknowledge your concern regarding our use of "human-like" to describe the approach. Upon reflection, we agree that the term could imply unintended commitments to psychological realism, which was not our goal. We have revised the paper to **replace "human-like" with "human-inspired,"** which better captures our intent: our method leverages high-level strategic decomposition reminiscent of human planning processes, rather than modeling human cognition per se. While our primary objective is indeed not to ground our approach explicitly in human behavioral data or cognitive models, we recognize that some readers may find connections to this literature insightful. Therefore, we have included a **targeted discussion in the appendix**, briefly summarized here:

> Hierarchical decomposition of goals has been central to human planning since the foundational work of Miller, Galanter, and Pribram (1960), and Lashley (1951). Humans naturally generate and evaluate multiple strategies, selecting among them by optimizing a cost–benefit tradeoff, as described by Lieder and Griffiths (2017). Moreover, humans utilize structural inference to abstract and simplify complex tasks (Gershman & Niv, 2010), supported by cognitive maps and mental models for flexible, model-based planning (Wang & Hayden, 2020). Lastly, the principle of bounded rationality highlights that humans regularly prune decision trees and favor simpler, cognitively less demanding plans (Huys et al., 2012; Lai & Gershman, 2024). These insights substantiate our strategist agent as balancing specificity (simplification and structure) against value (optimality), mirroring established principles of human decision-making.

We hope this clarification addresses your concern effectively.
## Additional comments and revisions

We appreciate the point regarding our characterization as a "New Paradigm in RL." Upon reconsideration, we agree this phrase could be overstated. We have thus **updated the description to "framework,"** which we believe is well supported by the formalization presented in Section 2.

We also thank the reviewer for highlighting specific presentation issues, such as the undefined terms "PPO" and "PAC." We have addressed these explicitly in the revised version to enhance clarity.

Finally, we have **expanded our experimental evaluation** to include **strong RL baselines (DreamerV3, EDE)** and **LLM-based approaches (LLM-as-Policy, Eureka)**. For detailed results showing our method's advantages over these baselines, please see our responses to the other reviewers.

Thank you again for your valuable feedback.
Nonparametric Modern Hopfield Models
Accept (poster)
Summary: This work proposes a nonparametric procedure to construct retrieval dynamics maps for modern Hopfield networks, based on a supervised, support-vector-regression-like problem where contaminated patterns serve as training data. The proposed procedure is shown to recover the standard dense retrieval dynamics of Ramsauer et al. (2020) for a particular choice of kernel. Building upon this procedure, the authors show that more efficient dynamics, based on applying a mask to the pattern samples, lead to sparse maps with sub-quadratic complexity while preserving desirable properties such as one-step retrieval of the patterns. Retrieval error upper bounds for these sparse-structured models are also discussed.

Claims And Evidence: The core of the manuscript is a new method for designing retrieval dynamics maps for modern Hopfield networks. A few theoretical properties of the method are discussed, and these claims are supported by rigorous mathematical proofs. Numerical evidence for the advantage of the sparse-structured maps is also discussed in the appendix. Overall, it is not very clear what the drawbacks of the proposed method are. For someone not familiar with the modern Hopfield literature, it would have been nice to have a broader limitations discussion in the paper.

Methods And Evaluation Criteria: The benchmarking presented in the appendix uses standard datasets for retrieval tasks. It would have been nice to have experiments with synthetic data satisfying all the assumptions in the theory, to illustrate tasks for which the sparse method can significantly outperform the dense method.

Theoretical Claims: I skimmed through the proofs, and they look sound. They mostly leverage standard results in SVR adapted to the setting of the paper.

Experimental Designs Or Analyses: No.

Supplementary Material: I skimmed through the SM.
Relation To Broader Scientific Literature: From my reading, the related literature on dense and sparse modern Hopfield networks is well acknowledged. However, I am unfortunately not familiar with this literature.

Essential References Not Discussed: Unfortunately, I am not familiar with this literature.

Other Strengths And Weaknesses: Overall, the paper is well written and easy to follow. I appreciate that the authors introduce modern Hopfield networks, as it helps the unfamiliar reader put the results in perspective. The main weakness is the lack of a critical discussion of the proposed method. All the results are seemingly positive, and the lack of an honest "limitations" section describing the main drawbacks raises a red flag for the unfamiliar reader.

Other Comments Or Suggestions: I strongly encourage the authors to discuss the limitations of their work in perspective with the related literature.

Questions For Authors:

- In condition (T1), do you really need $E(x)$ to decrease monotonically? This is strong and in principle rules out stochastic dynamics rules, e.g., SGD.
- Below eq. (3.2), are the inequalities applied to vectors component-wise? It would be nice to make this precise in the text.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
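For readers unfamiliar with the dense retrieval dynamics this review refers to (the special case the paper recovers for a particular kernel choice): the one-step update of Ramsauer et al. (2020) applies a softmax-weighted combination of the stored patterns to a contaminated query. A minimal sketch follows; the variable names (`Xi`, `beta`, `z`) and the toy orthonormal patterns are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def dense_retrieval_step(Xi, z, beta=8.0):
    """One update of the dense modern Hopfield dynamics
    (Ramsauer et al., 2020): z_new = Xi @ softmax(beta * Xi.T @ z),
    where the columns of Xi are the stored patterns."""
    return Xi @ softmax(beta * Xi.T @ z)

# Three well-separated (orthonormal) patterns stored as columns of Xi.
rng = np.random.default_rng(0)
Xi = np.linalg.qr(rng.standard_normal((16, 3)))[0]

# A contaminated version of pattern 1 as the query.
query = Xi[:, 1] + 0.1 * rng.standard_normal(16)
z = dense_retrieval_step(Xi, query)  # one step lands close to Xi[:, 1]
```

With well-separated patterns and a large inverse temperature `beta`, the softmax concentrates on the closest stored pattern, which is why a single update already retrieves it — the one-step retrieval property the review mentions for the sparse-structured variants as well.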
Rebuttal 1: Rebuttal: ### We thank the reviewer for the valuable comments. We have revised our paper to address each point. The revised draft is in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/81sygn18f4ma3ridm1xlf/AHFYlvzMMlYZnNRhBN9U8mw?rlkey=e1tvpqs6v83kx2rspfvmfgswh&st=z9iuk4iu&dl=0). All modifications are marked in BLUE color. Thanks! --- ### **1. Critical Discussion on Limitations (Newly Added Appendix B.2 https://imgur.com/a/R4PLZMs)** - **Fixed Feature Mapping (Reviewer Concern: Realistic Embedding Scenarios)** - We base our theoretical analysis on a fixed feature mapping. - This choice streamlines convergence proofs but restricts the model if features evolve. - Yet, **this condition is easy to meet. Moreover, in practice, static memory models do not have an evolving feature map.** - **Robustness to Random Masking (Reviewer Concern: Potential Retrieval Failures)** - Our retrieval guarantee assumes the correct memory item remains in the support set. - If the random mask excludes it, retrieval may fail. - **While this assumption is restrictive, it is necessary for tractable analysis.** - **Well-Separated Patterns for Exponential Capacity (Reviewer Concern: Data Correlations)** - Our exponential capacity analysis requires patterns to be sufficiently distinct. - This simplifies theoretical arguments, yet real datasets often have correlated patterns. - Yet, **this setup is standard for characterizing capacity in the literature** (Krotov & Hopfield, 2016; Demircigil et al., 2017; Ramsauer et al., 2020; Iatropoulos et al., 2022; Hu et al., 2023; Wu et al., 2024b; Santos et al., 2024a;b). Moreover, it can be relaxed by optimizing the Hopfield-energy landscape for larger memory capacity. - **Unproven Extensions in Section 5 (Reviewer Concern: Additional Convergence and Capacity Proofs)** - We outline several model extensions but do not provide full proofs (no convergence or capacity guarantees). We see these points as opportunities for growth.
Addressing them will extend our framework to broader settings and make retrieval more robust. We also invite the reviewer to check the newly added Appendix B.2 for details in our latest revision. --- ### **2. Experiments and Sparsity (Methods And Evaluation Criteria)** - **Our current experiments already simulate the conditions the theory addresses**, especially regarding sparsity of patterns. - Namely, our experiments already simulate both sparse (MNIST) and denser (CIFAR-10) data. This aligns with our theoretical assumptions. - **Our current numerical results already justify our theory.** Figure 2 in Appendix G.1 shows sparse models perform better on MNIST. This matches our claim that higher sparsity improves retrieval capacity and reduces error (Proposition 4.1, Theorem 4.1). - In MIL supervised experiments, sparse models converge faster (Corollary 4.1.1). These results match the modern Hopfield literature (e.g., Hu et al. 2023, Wu et al. 2024, Ramsauer et al. 2020). We have revised the draft to emphasize these: * Revised G.1 "Results": https://imgur.com/a/OS2q77m * Revised Fig 2 "Caption": https://imgur.com/a/JjxZwxv * Revised G.2 "Results", "Captions" of Fig 3, Fig 4: https://imgur.com/a/50mFsX5 --- ### **3. Technical Clarifications** - **Monotonic Decrease of $E(x)$ (Condition T1)** Our convergence analysis assumes that each update decreases (or does not increase) the Hopfield energy $E(x)$. **This is a standard property of energy-based Hopfield models** and guarantees deterministic convergence to a fixed point. It does not apply to stochastic or non-monotonic updates. We emphasize that **the Hopfield memory model serves a different purpose than deep learning models**: here, minimizing energy corresponds to pattern retrieval, not parameter learning. Comparing this condition to stochastic optimization in deep learning (e.g., SGD) is not appropriate: the objectives and dynamics differ fundamentally. - **Inequalities Below Eq.
(3.2)**: We confirm these are component-wise inequalities. We clarify this in the revised text (https://imgur.com/a/wu2eW9A). --- We appreciate the reviewer’s time and attention to details. The revisions should address the concern about missing limitations and clarify our theoretical and experimental contributions. We hope the updated manuscript meets your expectations. --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal and for taking my suggestion into account. I am keeping my score. --- Reply to Comment 1.1.1: Comment: Thank you again for your thorough review. We hope our clarifications and revisions have fully addressed your concerns. Please let us know if further clarification or revision is needed. We greatly appreciate your time and effort.
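For readers unfamiliar with the retrieval dynamics discussed in this thread, the standard dense one-step update of Ramsauer et al. (2020), $\mathcal{T}(x) = \Xi\,\mathrm{softmax}(\beta\,\Xi^\top x)$, can be sketched in a few lines of NumPy. This is an illustrative reconstruction with arbitrary dimensions and $\beta$, not code from the paper under review:

```python
import numpy as np

def dense_retrieval(Xi, x, beta=8.0):
    """One-step dense update T(x) = Xi @ softmax(beta * Xi^T x).

    Xi: (d, n) matrix whose columns are the n stored patterns.
    x:  (d,) query, e.g., a contaminated pattern.
    """
    scores = beta * Xi.T @ x              # similarity of the query to each memory
    p = np.exp(scores - scores.max())     # numerically stable softmax
    p /= p.sum()
    return Xi @ p                         # convex combination of stored patterns

rng = np.random.default_rng(0)
Xi = rng.standard_normal((64, 10))
Xi /= np.linalg.norm(Xi, axis=0)          # unit-norm, well-separated patterns
query = Xi[:, 3] + 0.1 * rng.standard_normal(64)   # noisy version of pattern 3
retrieved = dense_retrieval(Xi, query)

# a single update moves the query toward the correct stored pattern
assert np.linalg.norm(retrieved - Xi[:, 3]) < np.linalg.norm(query - Xi[:, 3])
```

With well-separated unit-norm patterns and large $\beta$, the softmax concentrates on the correct memory, which is the one-step retrieval property (and the deterministic, energy-decreasing update of condition T1) discussed in this thread.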
Summary: This is a theoretical paper that introduces a non-parametric interpretation of Hopfield Nets. The proposed method uses SVR to learn a parameter matrix $W$ mapping from feature space to data space. Additionally, rather than focusing on "memorizing data" as the original Hopfield Nets do, the authors propose to frame the AM task as a map from queries to memories, where the queries can be considered corrupted versions of the memories (auto-association), or units that are distinct from the keys (hetero-association). Claims And Evidence: The authors propose novel "Nonparametric Hopfield" (NPH) Models, and claim this technique 1. Is compatible with modern Deep Learning architectures (this is not novel as [Hopfield Layers](https://github.com/ml-jku/hopfield-layers) integrates similar ideas into modern architectures). 2. Comes with strong (sub-quadratic) efficiency advantages (this is not novel since sub-quadratic complexity is also achieved by e.g., Performer) due to sparsity and use of basis functions. 3. The non-parametric Modern Hopfield framework can be used to formulate many existing methods. Given that none of the above claimed contributions are novel on their own, it seems that the fundamental contribution is the NPH framework itself and its characterization. Evidence for these contributions are proofs in the appendix. Methods And Evaluation Criteria: See "Experimental Designs & Analyses" Theoretical Claims: 1. SVR can be used to train a matrix $\mathbf{W}$ that stores (query, memory) pairs 2. This framework can be extended to recover several modern Hopfield Networks (e.g., Ramsauer et al., Choromanski et al., Beltagy et al.). I did not check the proofs for these claims in the appendix. The definitions and methods are sound (see "questions" for my confusion about sparsity). Experimental Designs Or Analyses: The paper offers no experimental validation of their claims in the main paper.
However, Appendix G offers a study on memory retrieval and learning. Given my time constraints, and because this section has been relegated to the appendix, I will give only a brief overview and comments: I have a difficult time interpreting the results, as the plots are noisy (many overlaid lines; it is unclear what the takeaway message is for each figure). I have several questions on this section and I believe it can be improved substantially: 1. Are these experiments tested in the 1-step retrieval paradigm or is recurrence involved? 2. Fig 2 caption says that the masking did not work well, but I don't understand the justification of "violating the $\mu \in \mathcal{M}$ assumption". How is "masking" even applied? Via zero-ing out pixels? Replacing pixels with gaussian noise? Adding gaussian noise? Surely there must be a "masking" choice that keeps the correct pattern in the memory set. 3. How can I understand these results in light of the theoretical guarantees of the paper? It would be nice to see a reference line for what the theory would guarantee, or, if the theory does not specify a bound for a particular experiment, to explicitly mention that. Additionally, some of the plots are really noisy and it's hard to tell what your method is. 4. Would be nice to see some qualitative pictures to understand what the experiments test. I will continue my review as though this is a purely theoretical paper since these results do not add additional insights or clearly confirm the proposed theory. Supplementary Material: The supplementary material is long but clearly organized and consists of: - Related work - Background on SVR - Proofs of theorems & lemmas in the main text - Examples of how to understand formulations of MHNs as NPH models. - Simple experiments Given the sheer length of the appendices, I was not able to dedicate sufficient time to understand the supplementary. I read the experiments, related work, and background.
Relation To Broader Scientific Literature: NPH models store memories in a matrix that maps features back to the data space. This is a very general framework with which we can understand many recent optimizations of the MHN, esp. in the attention operation. Essential References Not Discussed: It is worth mentioning [Hoover et al. 2024](https://arxiv.org/abs/2410.24153) for its use of random features in a *recurrent* framework for Associative Memory that converges to actual (not just generalized) fixed points. Other Strengths And Weaknesses: Strengths: - **A general, novel approach**. The generalizability of this framework to existing methods of single-step associative memories is compelling. Weaknesses: - **Unclear empirical results**. It is not clear how the empirical results included in the appendix can be used to make any statements as to the efficacy and potential of this approach. Additionally, because the error in this method depends on how close together the memories are, it is unclear how strong the bounds are on large datasets. - **Missing necessary background**. The introduction is difficult to read without a strong background on the papers Hu et al. and Wu et al. published in the past two years. The background explains this paper's relevance to attention (P3), describing the one-step update rule of Ramsauer et al. and the desiderata for learning an associative memory (i.e., it must satisfy a monotonically decreasing energy function (T1) that converges to fixed points (T2)), but it does not elucidate the claims made in the introduction regarding sparsity (P2) and efficiency (P1). It would be nice if the related work section in the appendix (which I hope has a chance to make it into the main paper in some form when accepted) could serve as all the necessary background needed to understand this paper's contributions.
Other Comments Or Suggestions: Please take my comments with a grain of salt since there were several aspects of this work that I did not understand. 1. The paper would benefit from a formal definition/justification of what "non-parametric" means in the context of NPH models. Isn't the matrix $\mathbf{W}$ the parameter matrix? For example, does the definition used to describe the SVR learning task work for understanding the NPH models?: > one that does not assume a specific functional form, and is flexible in the number of parameters. [L256] 2. The statement "competitive alternatives with memory-enhanced functionalities" [L415 right col] requires some experimental justification which should be discussed in the main paper. 3. For clarity of exposition, I would recommend the term "query" be used instead of "contaminated memory patterns" $\xi_\mu + \delta \xi_\mu$ (as currently described in e.g., Lemma 3.1 and [L200 right col]), as used in the beginning of sec 3.1 and again in 4.1. I believe this will help sell the message that this framework applies to both auto- and hetero- associative memories in the important introduction to the method. Questions For Authors: 1. I don't understand what's special about the Sparse-Structured formulation of NPH models. Definition 3.3 is simply the SVR optimization problem of Eq. (3.2) on a subset of the data points. If I understand correctly, you will never learn those data points which are not masked? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the constructive feedback. We have revised our paper to address each concern in detail. The revised draft is in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/81sygn18f4ma3ridm1xlf/AHFYlvzMMlYZnNRhBN9U8mw?rlkey=e1tvpqs6v83kx2rspfvmfgswh&st=z9iuk4iu&dl=0). All modifications are marked in BLUE color. --- ### 1. Empirical Clarity and Figure Explanations > **Comment:** The reviewer found our experimental setup unclear regarding retrieval steps, masking strategy, and theoretical connections. They also requested stronger links between figures and the underlying theory. **Response:** 1. **One-Step Retrieval:** We now explicitly state that *all experiments use a single feed-forward (one-step) retrieval* without iterative updates: https://imgur.com/a/lAJLKR6 2. **Masking Strategy:** The masking is done at the similarity-score level, following random masked attention. It sometimes removes the target memory if done randomly (e.g., Random Masking). This is what the statement in the Fig. 2 caption is about. In contrast, top-$k$ masking retains the most relevant entries and preserves the correct memory in the set. 3. **Linking Theory to Experiments:** We have revised Appendix G.1 and G.2 to include direct references to the theorems and remarks in the main text: - **Figures 2 & 3 (Thm 4.1, Prop 4.1, Coro 4.1.1):** These results highlight memory capacity limits and robustness under sparsity or noise. - **Figure 2 (Remark 4.2):** Explains how different noise levels affect retrieval. - **Figure 4:** Demonstrates quick convergence and stable performance under $\epsilon$-sparse retrieval. We note that random masking can remove the target memory, violating the $\mu \in \mathcal{M}$ assumption (Theorem 3.2). In the revised draft, we have updated the figure captions and added clarifications such that readers can immediately see the theoretical motivations for each experiment.
Examples of these changes appear in the updated Appendix G.1 ("Results") and G.2 ("Captions" for Figures 2–4). E.g., https://imgur.com/a/OS2q77m and https://imgur.com/a/JjxZwxv and https://imgur.com/a/50mFsX5. --- ### 2. Focus on Hopfield Memory Models & Novelty > **Comment:** The reviewer questions the distinction between Hopfield networks and Hopfield memory models, and the novelty of our approach. **Response:** We emphasize that **Hopfield memory models** and **Hopfield networks/layers** are conceptually distinct: - **Hopfield memory model**: A fixed content-addressable **memory model** with no training; retrieval is based on similarity to stored patterns. This is our focus. - **Hopfield network (e.g., Hopfield Layer)**: A **neural layer** integrated into deep learning, trained with backprop. This builds on the “Dense Associative Memory–Transformer Attention” correspondence (Ramsauer et al. 2021), later generalized by Hu et al. (2023) and Wu et al. (2024). Our work **extends the prior modern Hopfield memory model** by framing it as a nonparametric regression problem. This allows efficient variants (Sec 3–4) and connects it to attention mechanisms (e.g., Performer) under a rigorous, unified theory. To our knowledge, this unification has not been done before and constitutes a novel contribution. We have added a remark regarding above points https://imgur.com/a/E93HSM5. In the final version, we promise to move this remark to Sec 2 (background) if space allows. --- ### 3. Terminology & Background > **Comment:** The reviewer recommended refining terminology and providing enough background to ensure clarity. **Response:** - **Cite Hoover et al. 2024:** https://imgur.com/a/Ix7cp7n - **Question: Masked Target?** Yes, if a pattern is masked, it will not be memorized by the SVR. This allows us use different masking strategies to accompany different sparsity patterns in sec 3.2 & E. - **Nonparametric:** Yes, it does describe NPH. 
Despite having a matrix $W$ in the **primal** formulation of SVR, the number of model parameters is not fixed. This is more obvious in the **dual** form of SVR. In Sec. 3, we cast the retrieval dynamics $\mathcal{T}$ as SVR. The number of memories is proportional to the number of support vectors, and hence proportional to the model size. Thus the model size is not fixed. This is nonparametric (Remark 3.4). We have revised Appendix C to strengthen this: https://imgur.com/a/E3Fnic5. - **Background in Main Text:** We agree that the related work section can provide more background to the main text. When finalizing the paper, we will try our best to integrate this background if space allows. - **“Memory-Enhanced Functionalities” (L415, right):** We have added a remark to clarify this in Appendix F (https://imgur.com/a/E93HSM5). In the final version, we promise to move this remark to Sec 2 (background) if space allows. This remark ties them to the benefits of content-addressable memory systems in real-world applications. --- Thank you for the review. Happy to discuss more :) --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions and updating your draft for clarity. I am satisfied with the updated changes in the manuscript and believe the paper is now a stronger submission. I am updating my score to a 4. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thorough review and for increasing the score. We are pleased that our revisions addressed your concerns and truly appreciate your feedback.
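The dual-form, nonparametric construction discussed in this thread (a learned map from contaminated patterns back to clean memories, with one coefficient row per stored pattern) can be illustrated concretely. The sketch below uses kernel ridge regression as a closed-form stand-in for the paper's soft-margin SVR so the code stays short, with the exponential kernel $\exp(\beta\langle x, y\rangle)$; all sizes and constants are arbitrary illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, beta = 32, 8, 4.0
Xi = rng.standard_normal((n, d))                 # rows: clean memory patterns xi_mu
Xi /= np.linalg.norm(Xi, axis=1, keepdims=True)
Xq = Xi + 0.05 * rng.standard_normal((n, d))     # contaminated training inputs

def kern(A, B):
    return np.exp(beta * A @ B.T)                # exponential ("softmax") kernel

# Dual-form fit of the map (noisy pattern -> clean pattern). The number of
# dual coefficients grows with the number of stored memories: "nonparametric".
lam = 1e-6
alpha = np.linalg.solve(kern(Xq, Xq) + lam * np.eye(n), Xi)

def retrieve(x):
    return (kern(x[None, :], Xq) @ alpha)[0]     # one feed-forward retrieval pass

# the learned map sends each contaminated training pattern back to its memory
assert np.linalg.norm(retrieve(Xq[2]) - Xi[2]) < 1e-3
```

Swapping the ridge fit for an $\epsilon$-insensitive SVR (as in the paper) would make only the patterns with nonzero dual variables act as support vectors, which is the support-set picture the rebuttal describes.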
Summary: This work leverages the concept of soft-margin Support Vector Regression (SVR) to reformulate modern Hopfield models as a non-parametric regression task, where a noisy target pattern is mapped to a reconstructed target pattern using Support Vector Machines (SVMs). By applying the Lagrange multiplier method, the problem is transformed by incorporating boundary conditions, ultimately deriving the mapping function $T_{SVR,\Phi}$, which serves as the update rule for the model. Within this unified framework, the authors successfully reproduce the standard dense modern Hopfield model while also proposing a generalized Sparse-Structured Modern Hopfield Model. This new formulation significantly reduces computational complexity and introduces three specific efficient variants: Random Masked Modern Hopfield Model, Efficient Modern Hopfield Model, Top-K Modern Hopfield Model. Furthermore, the authors conduct a rigorous theoretical analysis under this framework, leading to more precise sparsity-dependent retrieval error bounds and proving the fixed-point convergence of the sparse-structured Hopfield model. Claims And Evidence: The submission makes three main claims, each of which is examined below regarding its supporting evidence and potential concerns. Claim #1: The paper provides a nonparametric framework and sub-quadratic complexity * Supported: The paper reformulates modern Hopfield models using soft-margin SVR, demonstrating a nonparametric regression-based approach. Chapter 3 provides a formal derivation of the Sparse-Structured Modern Hopfield Model, proving its reduced computational complexity. Appendix G.5 provides empirical FLOPs and runtime comparisons. * Concerns: The computational efficiency improvements rely on asymptotic analysis, but practical benefits are not always consistent across implementations (e.g., PyTorch’s inefficiency with random masked Hopfield). 
Claim #2: The paper provides a formal characterization of retrieval error bounds, noise robustness, and memory capacity. * Supported: tighter retrieval error bounds (Corollaries 4.1.1 and 4.1.2); stronger noise robustness (Remark 4.2); exponential capacity scaling (Lemma 4.1, Proposition 4.1). * Concerns: These results seem natural extensions of classical Hopfield models, raising the question of whether they provide fundamentally new insights. Theoretical error bounds are not numerically validated: while the framework is well-formulated, empirical verification of retrieval error predictions is missing. Claim #3: The framework bridges modern Hopfield models with attention mechanisms (e.g., BigBird, Longformer, Linear Attention). * Supported: The theoretical formulation indeed connects the proposed models to various attention mechanisms (such as BigBird, Longformer, Linear Attention, and Kernelized Attention). Appendix G contains numerical experiments, showing performance metrics such as loss curves and accuracy trends. * Concerns: The experimental section lacks clear explanations of what conclusions should be drawn, and the connection between experiments and theoretical findings is weak. Methods And Evaluation Criteria: The main body of the paper primarily focuses on theoretical derivations, while experiments are mainly presented in the appendix. Each experiment includes a corresponding description of the models and parameters used, ensuring clarity in implementation details. The concerns regarding the experimental validation have already been discussed in the previous question. Theoretical Claims: I carefully reviewed the theoretical analysis presented in the main text and found the derivations to be well-structured and logically sound. Additionally, I briefly browsed the proofs in Appendices C and D and did not notice any apparent issues. Experimental Designs Or Analyses: I reviewed the experimental design and analyses, particularly those presented in Appendix G.
The experiments are well-structured, with clear descriptions of the models, parameters, and evaluation metrics such as ACC, Loss, FLOPs, and computational time. The FLOPs and runtime comparisons in Appendix G.5 provide empirical validation of the efficiency of different sparse Hopfield variants. The results confirm that Random Feature Hopfield, Linear Hopfield, and Window Hopfield achieve reduced FLOPs with minimal performance loss, supporting the claim of computational efficiency. However, some concerns remain: * Empirical validation of theoretical retrieval error bounds is missing. While the theoretical framework is rigorous, numerical verification comparing predicted retrieval errors with actual retrieval performance would further strengthen the findings. * Lack of clear interpretation of experimental results. In Appendix G.1–G.4, the figures and tables present results, but there is no explicit discussion of how these findings support the theoretical claims. A clearer explanation of the key takeaways would improve the clarity and impact of the experimental section. Supplementary Material: I mainly reviewed Appendix ABCD and G. Relation To Broader Scientific Literature: This paper introduces a novel approach by leveraging soft-margin SVR to reformulate modern Hopfield models as a non-parametric regression task. To the best of my knowledge, this specific formulation has not been explored in prior work, making it a notable contribution to the field. Building on this framework, the paper proposes a generalized Sparse-Structured Modern Hopfield Model, which aims to enhance computational efficiency and scalability. However, based on the experimental results in the appendix, the proposed model does not consistently outperform existing methods. Specifically, Hu et al., 2023 appears to achieve better capacity and robustness in many cases. 
Furthermore, while the paper derives sparsity-dependent retrieval error bounds and proves fixed-point convergence, some of the additional conclusions—such as tighter retrieval error bounds and stronger noise robustness—are relatively intuitive extensions of classical Hopfield networks. These properties are already well understood in the standard Hopfield model, raising the question of whether the new framework offers fundamentally novel insights beyond a reformulation of known results. Essential References Not Discussed: When discussing Hopfield networks, please also reference Amari’s earlier work (e.g., Amari, 1972), which laid a more detailed mathematical foundation for associative memory models and preceded Hopfield’s 1982 formulation. Other Strengths And Weaknesses: One of the major strengths of this paper is its clear and well-structured presentation. Despite being a theoretical work, the explanations are concise, logically organized, and easy to follow, making the complex mathematical formulations accessible. The clarity of writing ensures that the theoretical contributions are well-communicated without unnecessary complexity. Other Comments Or Suggestions: On line 100, right side, the phrase "We also verify their efficacy through thorough numerical" is repetitive. Questions For Authors: 1. The paper provides rigorous theoretical retrieval error bounds, but there is no numerical verification comparing predicted vs. actual retrieval performance. Could you provide empirical validation to confirm that the theoretical bounds accurately reflect practical performance? 2. The appendix suggests that Hu et al., 2023 achieves better capacity and robustness in many cases. What are the key advantages of the proposed model over Hu et al., 2023, beyond computational complexity? 3. The figures and tables in Appendix G.1–G.4 lack clear explanations regarding how they support the theoretical claims. 
Could you provide a more explicit discussion of the key takeaways from these results? 4. Some conclusions, such as tighter retrieval error bounds and stronger noise robustness, seem to be natural extensions of classical Hopfield models. What new theoretical insights does your framework provide beyond these expected properties? Code Of Conduct: Affirmed. Overall Recommendation: 3
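One way to make the sub-quadratic efficiency claim examined in this review concrete: Performer-style positive random features give an unbiased Monte Carlo estimate of the exponential kernel used in dense retrieval, so the attention-like retrieval weights can be linearized in the number of stored patterns. The sketch below only checks the kernel approximation itself; it is an illustrative reconstruction with arbitrary sizes, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, beta = 16, 4096, 1.0
W = rng.standard_normal((m, d)) * np.sqrt(beta)   # random projection directions

def phi(v):
    """Positive random features with E[phi(x) @ phi(y)] = exp(beta * x @ y)."""
    return np.exp(W @ v - 0.5 * beta * (v @ v)) / np.sqrt(m)

x = rng.standard_normal(d); x /= np.linalg.norm(x)
y = rng.standard_normal(d); y /= np.linalg.norm(y)

exact = np.exp(beta * x @ y)
approx = phi(x) @ phi(y)
assert abs(approx - exact) / exact < 0.5          # Monte Carlo estimate is close

# Precomputing phi for all n stored patterns lets each retrieval query be
# answered in O(m * d), with no per-query pass over all n memories.
```

The variance of this estimator grows with $\beta$ and with pattern alignment, which is one reason such linearizations trade accuracy for speed, echoing the accuracy/efficiency discussion in this thread.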
Rebuttal 1: Rebuttal: ## Reviewer’s Comment (Claims and Evidence & Experimental Designs or Analyses) > **Concerns** > - Claim2: The theoretical results (noise robustness, tighter retrieval error bounds) do not appear numerically verified in detail. > - Claim3: The experimental section doesn’t explicitly connect to the theory. > The paper lacks explicit discussion tying the figures/tables in Appendix G.1–G.4 to the theoretical claims. > Empirical validation of the theoretical retrieval error bounds is missing. ### **Authors’ Response** **In response to Concerns of Claim 2**, numerical verifications of the retrieval error bounds appear in **Sec. G.1** and are summarized in **Fig. 2** (pages 29–30). **In response to Concerns of Claim 3,** we believe there might be an oversight. In the “Results” paragraphs, as well as the captions of **Figs. 2 and 4**, we explicitly linked our numerical findings to their underlying theoretical claims. We feel that these “Results” paragraphs and captions are sufficiently informative (also check our concluding remarks, line 411-420, right). We apologize for any confusion caused. We have revised the draft with better clarity (e.g., https://imgur.com/a/OS2q77m and https://imgur.com/a/JjxZwxv and https://imgur.com/a/50mFsX5.) --- ## Reviewer’s Comment (Relation to Broader Scientific Literature) > Certain properties, such as tighter error bounds and stronger noise robustness, might be seen as natural extensions of classical Hopfield networks. What’s truly novel here? ### **Authors’ Response** 1. **Nonparametric Framework**: Prior works do not formulate Hopfield retrieval as a **soft-margin SVR**, which is our key novelty. By bridging Hopfield networks with regression theory, we gain new insights and design flexibility (e.g., kernelized or masked retrieval). 2. 
**Sub-Quadratic Sparse Hopfield**: We extend the dense model to a *computationally efficient* variant while retaining classical benefits (exponential capacity, fixed-point convergence). 3. **Sparsity Analysis**: Our results detail *when* sparse retrieval can outperform dense retrieval, complementing prior sparse Hopfield analyses (Hu et al., 2023). --- ## Reviewer’s Comment (Essential References Not Discussed) > ... Amari (1972) for earlier foundational work... ### **Authors’ Response** Thanks for the suggestion. We have updated the draft accordingly (https://imgur.com/a/IYMKyu3). --- ## Reviewer’s Comment (Questions for Authors) > 1. Are the theoretical retrieval error bounds actually verified numerically? > 2. How does this approach compare to Hu et al. (2023), which reportedly achieves better capacity and robustness in some cases? > 3. Could you clarify the main takeaways of the figures/tables in Appendix G.1–G.4? > 4. Are these “tighter” bounds and “stronger noise robustness” truly novel beyond standard Hopfield extensions? ### **Authors’ Response** 1. **Numerical Verification (Q1)** - Yes. **Sec. G.1 (Fig. 2)** (pp. 29–30) shows empirical vs. predicted retrieval error for varied sparsity levels, confirming Theorem 4.1. Please also see https://imgur.com/a/JjxZwxv. We can add further numeric comparisons if requested. 2. **Comparison with Hu et al. (2023) (Q2)** - **Fundamentally Different Mechanisms**: Hu et al. rely on a data-**dependent** entropic regularizer (still $O(n^2)$). Our approach imposes data-**independent** masks for **sub-quadratic** retrieval. - **Focus on Efficiency**: We do **not** claim consistent accuracy superiority; rather, we stress efficiency (Thm 3.2). - **Extended Theory**: Our SVR-based approach and explicit sparsity scaling (Theorem 4.1, Remark 3.6) broaden the theory beyond Hu et al.’s entropic analysis. 3. **Main Takeaways in Appendix G (Q3)** - **Sec. G.1–G.4**: We evaluate different sparse strategies (top-$k$, random masks, etc.)
under noise and large-scale tasks. Each figure corresponds to Thm 4.1, Coro 4.1.1, or Prop 4.1, showing that careful sparsity can match or surpass dense retrieval in sparse regimes. 4. **Novel Theoretical Insights (Q4)** - We recast retrieval as a **soft-margin SVR** problem, yielding **margin-based error bounds** and **stronger noise-robustness** than purely energy-based or entropic methods (e.g., Hu et al., 2023). - This framework supports a **diverse range of sparsity schemes**—top-$k$, random masking, or learned patterns—yet still guarantees **fixed-point convergence** and **exponential capacity**. - **Explicit sparsity-dependent bounds** show how selectively ignoring irrelevant patterns can *improve* retrieval, providing concrete criteria for balancing accuracy and efficiency. - Consequently, these “tighter” bounds and “stronger” robustness are not minor tweaks to classical Hopfield models; rather, they stem from the **SVR-based margin analysis** and support **sub-quadratic** implementations for large-scale memory retrieval. --- Thank you for the review. Happy to discuss more :) --- Rebuttal Comment 1.1: Comment: I have carefully read the authors’ response and the revised version of the paper. I sincerely appreciate the authors’ efforts and clarifications. The content in Appendix G is now significantly clearer than before, and the alignment between theory and experiments is much improved. While these changes have enhanced the presentation, they do not substantially change the basis of my original evaluation. As such, I do not intend to change my score at this point. --- Reply to Comment 1.1.1: Comment: We are glad our revisions have improved clarity. Thank you again for your detailed review and for your time and effort. Your constructive comments have certainly improved this work!
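As a concrete companion to the sparse strategies discussed in this thread, top-$k$ masking restricts the one-step update to the $k$ highest-scoring memories, which (unlike random masking) keeps the target pattern in the support set. A minimal NumPy sketch with arbitrary sizes, an illustration rather than the paper's code:

```python
import numpy as np

def topk_retrieval(Xi, x, k=3, beta=8.0):
    """One-step retrieval over only the k most relevant stored patterns.

    Memories outside the top-k mask receive exactly zero weight; the softmax
    and the convex combination then involve only k columns of Xi.
    """
    scores = beta * Xi.T @ x
    keep = np.argsort(scores)[-k:]                # data-dependent top-k mask
    p = np.exp(scores[keep] - scores[keep].max())
    p /= p.sum()
    return Xi[:, keep] @ p

rng = np.random.default_rng(1)
Xi = rng.standard_normal((64, 20))
Xi /= np.linalg.norm(Xi, axis=0)
query = Xi[:, 7] + 0.1 * rng.standard_normal(64)
retrieved = topk_retrieval(Xi, query, k=3)

# the target stays in the support set, so one-step retrieval still succeeds
assert np.linalg.norm(retrieved - Xi[:, 7]) < np.linalg.norm(query - Xi[:, 7])
```

Note that this naive version still scores all $n$ memories before masking; the data-independent structured masks discussed in the rebuttal are what avoid even that cost.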
Summary: In this work, the authors replace the energy minimization step by support vector regression, which trains on pairs of a pattern and its perturbed version, regressing the true pattern on the perturbed one. They also provide a sparse version which uses a subset of patterns. In addition, they perform synthetic and realistic memory retrieval and supervised learning tasks with this model. ## Update after rebuttal I requested that the authors clarify what they mean by the sparse model inheriting the appealing theoretical properties of the dense model so that the reader does not get a false impression, which they agreed to. I have upgraded the score to 3. Claims And Evidence: They claim they propose a nonparametric framework for a deep-learning-compatible modern Hopfield network. Turning approximate retrieval into a regression problem is interesting, but seeing approximate retrieval as denoising makes sense. They say they introduce the first efficient sparse modern Hopfield model with sub-quadratic complexity, which is true. In the abstract, they claim the sparse model inherits the appealing theoretical properties of its dense analog. This is where I begin to get confused. Results like Corollary 4.1.1, showing the error of sparse recovery being lower than that of dense recovery, only apply to patterns kept in the sparse model. Target patterns that are masked out may not get properly recovered. This seems to be borne out by the empirical results. From what I can understand, the sparse Hopfield model in Hu et al. 2023 is a very different thing, where the regularizer is chosen so that attention is sparse. Sparsity by random masking is not the same thing. Thus their claims of sparsity-induced advantages (the topic of many of their theoretical results) ring a bit hollow to me. After the discussion with the authors, I feel they should clarify this point in the main text. Methods And Evaluation Criteria: The benchmarks and methods are fine.
Theoretical Claims: The theoretical claims seem correct, but they are of limited use if the sparse model is only good for a subset of patterns. Am I missing something? Experimental Designs Or Analyses: They are sound. Supplementary Material: Sections C and G. Relation To Broader Scientific Literature: Replacing the energy model with a trainable support vector regression is an interesting idea, with the basis functions inspired by softmax attention. Perhaps we do not need the crutch of Hopfield models anymore? Essential References Not Discussed: I am not aware of any. Other Strengths And Weaknesses: No comments. Other Comments Or Suggestions: There are a few typos, like page 4: $\mathcal T_\Xi(x):\mathbb{R}^d\to\mathbb{R}^d$. Questions For Authors: I hope I am misunderstanding something. I keep feeling the general statements made about the sparse Hopfield model defined here are too good to be true, as explained above. Also, why is $\Phi$ defined in Lemma 3.1 so that we weigh $\xi_\mu$ with the attention between $x$ and $\xi_\mu+\delta \xi_\mu$, rather than with $\xi_\mu$ (Eq. 3.5)? Code Of Conduct: Affirmed. Overall Recommendation: 3
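The pattern-pair construction this review summarizes (noisy copies $\xi_\mu + \delta\xi_\mu$ regressed back to the clean $\xi_\mu$) can be illustrated with an off-the-shelf multi-output SVR. This is a toy sketch of the general idea only; the RBF kernel, hyperparameters, and per-dimension regression are illustrative stand-ins, not the paper's attention-inspired basis functions or solver.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
d, n_patterns = 8, 5
patterns = rng.standard_normal((n_patterns, d))   # stored patterns xi_mu

# Training pairs: noisy copies (xi_mu + delta xi_mu) regress back to clean xi_mu
noisy, clean = [], []
for xi in patterns:
    for _ in range(20):
        noisy.append(xi + 0.1 * rng.standard_normal(d))
        clean.append(xi)
noisy, clean = np.array(noisy), np.array(clean)

# Soft-margin SVR per output dimension; the RBF kernel here is only a
# stand-in for the attention-like basis (illustrative hyperparameters)
retriever = MultiOutputRegressor(
    SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.05)
).fit(noisy, clean)

# A corrupted query is mapped back toward its stored pattern (denoising)
query = patterns[0] + 0.1 * rng.standard_normal(d)
recovered = retriever.predict(query[None])[0]
```

Training only on exact pairs $(\xi_\mu, \xi_\mu)$ would let the regressor learn the identity map; the perturbed inputs are what force actual denoising behavior, as the rebuttal below also argues.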
Rebuttal 1: Rebuttal: The revised draft is in this [anonymous Dropbox folder](https://www.dropbox.com/scl/fo/81sygn18f4ma3ridm1xlf/AHFYlvzMMlYZnNRhBN9U8mw?rlkey=e1tvpqs6v83kx2rspfvmfgswh&st=z9iuk4iu&dl=0). All modifications are marked in BLUE color. Thanks! --- ### **Reviewer’s Comment (Claims and Evidence 1)** > *In the abstract, they claim the sparse model **inherits the appealing theoretical properties** of its dense analogue...Results like Corollary 4.11 showing the error of sparse recovery being lower than dense recovery **only applies to patterns kept in the sparse model**. Target patterns which are masked out may not get properly recovered. This seems to be borne out by the empirical results.* --- ### **Authors’ Response** **When we say the sparse model “inherits” the theoretical properties of the dense version, we mean it retains the key Hopfield guarantees** (e.g., alignment with transformer attention, rapid convergence to fixed-point memories, exponential storage capacity). Our results show that introducing sparsity does not break these fundamental properties. Yes, performance can degrade when target patterns are masked out. **This reflects a general accuracy–efficiency tradeoff** (see lines 421–431). However, our numerical results (Fig. 4, Sec. G.5) show that, in practice, the degradation is mild: sparse masking achieves decent accuracy while significantly improving efficiency (similar to how linear/random feature approximations to softmax perform in other contexts). --- ### **Reviewer’s Comment (Theoretical Claims)** > *The theoretical claims seem correct but they are of limited use if the sparse model is only good for a subset of patterns. Am I missing something?* --- ### **Authors’ Response** **We do NOT claim the sparse model always outperforms the dense model in all situations.** Instead, we specify **conditions** where sparse retrieval can yield better accuracy (e.g., when many irrelevant memories introduce noise). 
Under those conditions, our empirical results (e.g., Fig. 2, p. 30) support the theoretical claim https://imgur.com/a/JjxZwxv. --- ### **Reviewer’s Comment (Claims And Evidence 2)** > *From what I can understand, the sparse Hopfield model in Hu et al. (2023) is a very different thing where the regularizer is chosen so that attention is sparse. Sparsity by random masking is not the same thing. Thus, their claims of sparsity-induced advantages ring a bit hollow to me.* --- ### **Authors’ Response: Sparse Hopfield Model – Sparsity Masking vs. Regularization** We appreciate the comparison. To clarify, our model uses **sparse masking** (Definition 3.3), with random masking as one special case (Example 1 in line 275). In **Remark 4.1 (line 368)**, we compared our approach to [Hu et al. 2023]. Here is a brief overview: - **Different Sparsity Mechanisms** - **Ours:** Data-***independent***, predefined masking for efficiency. - **Hu et al. 2023:** Data-***dependent***, regularizer-based sparsity (but still with $O(n^2)$ complexity like [Ramsauer et al. 2020]). - **Focus on Analysis** We study how controlling the sparsity dimension $k$ affects retrieval accuracy (Theorem 4.1). Hu et al. (2023) do not provide that level of interpretability. - **No Claim of Superiority** We do not assert that random/sparse masking categorically outperforms regularizer-based methods; we emphasize their **analytical tractability** and **efficiency**. --- ### **Reviewer’s Comment (Questions For Authors)** > *Why is $\Phi$ defined in Lemma 3.1 to weigh $\xi_{\mu}$ by the attention between $x$ and $\xi_{\mu} + \delta \xi_{\mu}$, rather than $\xi_{\mu}$ itself (Eq. 3.5)?* --- ### **Authors’ Response** 1. **SVR-Based Nonparametric Interpretation for MHM** (lines 172–202; 159–169; Appendix C.1) - We cast Hopfield retrieval as a soft-margin SVR. - Each stored pattern $\xi_\mu$ is paired with $\xi_\mu + \delta\xi_\mu$. 
- This enforces the SVR model to map any noisy query near $\xi_\mu$ back to $\xi_\mu$. - Equation (3.2) enforces $\epsilon$-retrieval (Definition 3.2). 2. **Aligning with Basin of Attraction (lines 258–274) and $\epsilon$-Retrieval** - Hopfield models attract queries near $\xi_{\mu}$ back to $\xi_{\mu}$. - $\xi_{\mu} + \delta\xi_{\mu}$ defines a local basin around each pattern. - The SVR constraints ensure $\lvert T_{\mathrm{SVR},\Phi}(x) - \xi_{\mu}\rvert \le \epsilon$ within that basin, enabling **noise-robust retrieval** (from partial or corrupted queries i.e., Definition 3.2.) 3. **Avoiding Trivial SVR Solutions** - Using $(\xi_{\mu}, \xi_{\mu})$ only would let the system learn the **identity** (no actual denoising). - Including $\delta\xi_{\mu}$ forces **true associative memory** behavior. Overall, using $\xi_{\mu} + \delta\xi_{\mu}$ ensures the nonparametric retrieval aligns with Hopfield’s attractor dynamics, supports noise-robust memory retrieval, and prevents trivial SVR solutions. --- Thank you for the review. Happy to discuss more :) --- Rebuttal Comment 1.1: Comment: I feel the authors should clarify what they mean by the sparse model inheriting the appealing theoretical properties of the dense model so that the reader does not get a false impression. I am upgrading the score to 3. --- Reply to Comment 1.1.1: Comment: We are very happy that our rebuttal and revision have met your expectations. Yes, we will clarify the "inherit" issue thoroughly in the final version. Thank you again for your time, effort, and attention to detail! Your constructive comments have certainly improved this work!
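The dense vs. sparse retrieval contrasted throughout this exchange can be sketched as a one-step modern-Hopfield update with an optional top-$k$ mask. The mask below is the generic top-$k$ variant, and $\beta$ and $k$ are arbitrary illustrative choices, not the authors' Definition 3.3 or Hu et al.'s regularizer-based sparsity.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def hopfield_retrieve(patterns, query, beta=4.0, k=None):
    """One-step modern-Hopfield update: a softmax-weighted combination of
    stored patterns. If k is given, only the top-k logits survive the mask."""
    logits = beta * patterns @ query
    if k is not None:
        masked = np.full_like(logits, -np.inf)
        keep = np.argpartition(logits, -k)[-k:]   # indices of k largest logits
        masked[keep] = logits[keep]
        logits = masked
    return patterns.T @ softmax(logits)           # convex combination of kept patterns

rng = np.random.default_rng(1)
patterns = rng.standard_normal((50, 16))          # 50 stored patterns, d = 16
query = patterns[7] + 0.2 * rng.standard_normal(16)

dense = hopfield_retrieve(patterns, query)        # attends over all 50 memories
sparse = hopfield_retrieve(patterns, query, k=5)  # ignores the 45 weakest memories
```

When the true memory carries the largest logit, masking the remaining memories changes the output very little while reducing the number of patterns touched; when the true memory is masked out, retrieval fails, which is the accuracy-efficiency tradeoff at the center of this discussion.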
Graph World Model
Accept (poster)
Summary: This paper introduces the first world model in the graph domain. The proposed method is capable of handling multimodal graph data within a unified framework, accomplishing tasks across multiple domains, including prediction, generation, and optimization. The graph world model demonstrates exceptional performance and generalization capabilities on these tasks. Additional experiments highlight the role of graph structures in these tasks and the ability of the graph world model to leverage them effectively. Claims And Evidence: The claims in this paper are well validated, with no significant over-claiming. Methods And Evaluation Criteria: The proposed approach makes sense, but its core idea still relies on mixing structural information into LLMs and LVMs, followed by fine-tuning techniques. The innovation lies in unifying different tasks, which is commendable, but the utilization of graph structures is somewhat limited. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments are diverse and well-chosen, but there are some issues with task descriptions: 1) **Missing details** Taking graph prediction as an example, the task splits and the comparison methods are not clearly described. For instance, in the case of supervised GCN, the settings should differ significantly from those of the Graph World Model. How GCN utilizes textual information and other details are not provided. In fact, in a supervised setting, the performance of GNNs should be better. 2) **Experiment costs** The cost of using the Graph World Model has also not been reported. As a reader more focused on graph-related tasks, I am concerned with the differences in training and inference efficiency between the Graph World Model and other foundation models, as well as the feasibility of deploying the Graph World Model to large-scale graphs. 3) **Examples** The content on world generation and world optimization needs substantial revision. 
First, for the generation task, examples of generated outputs should be provided for comparison. Second, the input and output of the tasks should be clearly explained. I understand that the authors have simplified the descriptions in the main text due to space limitations, but the supplementary details in the appendix are still limited and not enough to help readers who are not deeply familiar with these tasks understand the task settings and configurations. 4) **A minor issue** The distinction between world generation and world optimization is unclear to me. They seem to be essentially the same type of generation task. Adding examples, outputs, and further clarifications for each task would be very helpful, especially given the broad range of domains covered in this paper. Supplementary Material: I have read the Appendix. Relation To Broader Scientific Literature: I believe this paper has a good broader impact. The concept of the World Model is an intriguing research direction, and exploring its potential to unify graph tasks would be a valuable attempt. It also provides inspiration for future work in this area. Essential References Not Discussed: I believe the paper does not have any significant references not discussed. Other Strengths And Weaknesses: The strength of this paper lies in its exploration of an interesting task setting, and integrating the World Model is a great attempt at advancing graph foundation models. Additionally, the experiments are well-designed and cover a broad range of domains, making this paper one of the most comprehensive I have read on graph foundation models. The main issues with the paper are clarity and technical novelty. In terms of clarity, the descriptions of the task details are too rough, and the comparisons between sub-tasks are limited, making it difficult to evaluate the performance of each task individually. This issue ties into the limited technical novelty that was mentioned earlier. 
Other Comments Or Suggestions: N/A Questions For Authors: The main questions have been listed before, and other issues require the authors to provide additional details. Once I have a clearer understanding of the experimental setup, I could raise further points. Other questions: 1) Why is INSTRUCTG2I not included as baselines? 2) In [1], multi-modal information, including images and text, has been shown to benefit predictors in traditional graph tasks. Can the graph world model achieve that as well? [1] Zhu, Jing, et al. "Multimodal graph benchmark." arXiv preprint arXiv:2406.16321 (2024). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response to Issue 1 (Missing details):** Thanks for the comments. The settings for the baselines primarily follow LLAGA (Chen et al., 2024a) and OFA (Liu et al., 2023a). As stated in Appendix A.3, we convert all nodes and labels in the Cora, PubMed, and HIV datasets into text. For all methods, we use BERT to obtain text embedding. We divide all datasets into training, validation, and test sets in an 8:1:1 ratio. We will add these details to the next version. **Response to Issue 2 (Experiment costs):** Thanks for your questions. We answer them one by one: ***[Training and inference efficiency.]*** Accurately comparing the training and inference efficiency of GWM with other FMs is highly challenging because many FMs are not designed to address multimodal problems and utilize various architectures. We can only compare the efficiency between GWM and LLM-based FMs from principles. For GWM-T, its efficiency shows no fundamental difference from other LLM-based FMs, as both are based on the standard instruction tuning. For GWM-E, its training process only requires fine-tuning the projector as stated in ***[lines 275-277]***, and its embedding-based method also saves a significant amount of token cost, making it more efficient. For GWM-E, it takes \~7 hours (\~$\frac{1}{4}$ of GWM-T) of training time on four NVIDIA A6000 GPUs described in ***[lines 352-376]***, and the inference time per case averages 0.213s (similar to GWM-T). Moreover, GWM-E significantly reduces memory usage with a shorter token length of 140.23 (~$\frac{1}{14}$ of GWM-T). ***[Large-scale graphs feasibility.]*** GWM-E can be deployed to large-scale graphs. In section 4.2, we designed a simplified GCN. It not only reduces computational complexity (Wu et al., 2019; He et al., 2020a) but also benefits from PyTorch's sparse matrix multiplication acceleration, allowing it to scale efficiently to large graphs. 
As shown in ***Tables 10 and 11***, we have conducted experiments with large graphs at the scale of hundreds of thousands, demonstrating GWM's ability to address computational complexity and scalability. **Response to Issue 3 (Examples) and 4 (A minor issue):** Thanks for the feedback. These tasks can all be framed as generation tasks when using an LLM to construct a GWM. But as stated in section 2.3, the essence of world optimization is to guide agents in decision-making, while world generation primarily enables agents to perform QA responses and generation; thus, their objectives are different. We distinguish these tasks to better demonstrate the capabilities of GWM in various aspects. Additionally, we have provided detailed descriptions of each task in ***section 2.3, section 5, Table 9, and Tables 13-22*** of the paper. However, following the reviewers' suggestions, we have supplemented the details in Tables 1-7 of [rebuttal.pdf](https://anonymous.4open.science/r/ICML_2025_14809-7B6C/rebuttal.pdf). We will add these details to the next version. **Response to utilization of graph structures:** Thanks for the comment. Both GWM versions fully leverage graph structures for message passing from the perspectives of tokens and embeddings. Furthermore, we verify through the experiment in section 5.2 (No Graph vs N-hop) that graph structures enhance performance on downstream tasks. **Response to Question 1:** Thanks for your question. We didn't include INSTRUCTG2I (with complex QFormer design) as a baseline since our goal was to design a lightweight framework like GWM-E. Following your advice, we compared performance on multi-modal generation tasks as follows. Results show INSTRUCTG2I outperforms GWM on Goodreads but underperforms on Multi-Modal-Paper. This may be because the large-parameter INSTRUCTG2I also requires a larger dataset to achieve good results. In contrast, our lightweight GWM offers better efficiency while adapting well to multiple scenarios. 
We'll include this discussion in our next version.

| Model | Goodreads (CLIP) | Goodreads (DINOv2) | Multi-Modal-Paper (CLIP) | Multi-Modal-Paper (DINOv2) |
|-------------|-----------|-----------|-----------|-----------|
| INSTRUCTG2I | **50.37** | **25.54** | 56.37 | 18.80 |
| GWM-T | 47.46 | 20.91 | **59.92** | 23.10 |
| GWM-E | 45.23 | 20.87 | 59.84 | **26.03** |

**Response to Question 2:** To validate whether similar conclusions to those in [1] can be achieved on GWM, we conducted an ablation study. Specifically, we designed a baseline named Text-only, which means training and testing solely on the text modality, compared with GWM-E. We made comparisons on planning and optimization (ALFWorld) as follows. It can be observed that removing other multimodal information leads to a performance decline in GWM, which aligns with the conclusions of [1]. We will conduct more ablation studies in the next version to further substantiate this finding.

| Method | Precision | Recall | F1 |
|--------|-----------|--------|-----|
| Text-only | 92.95 | 92.80 | 92.86 |
| GWM-E | **97.31** | **96.52** | **96.17** |

--- Rebuttal Comment 1.1: Comment: Thanks to the authors' comprehensive feedback! Most of my concerns have been addressed, and I'd like to increase my score to 4. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and constructive feedback. We are pleased to hear that our responses have addressed most of your concerns. We are committed to incorporating the suggested changes in our revisions to further enhance the manuscript.
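The "simplified GCN" invoked in this rebuttal (parameter-free multi-hop aggregation via sparse matrix multiplication, with every hop kept for the downstream decoder as a skip connection) can be sketched roughly as follows. The symmetric normalization and hop concatenation are assumptions based on the SGC-style references the authors cite (Wu et al., 2019), not their exact implementation.

```python
import numpy as np
import scipy.sparse as sp

def multi_hop_features(adj, feats, num_hops=2):
    """Parameter-free multi-hop aggregation in the spirit of SGC:
    repeatedly multiply by the symmetrically normalized adjacency
    (with self-loops) and keep every hop for the downstream decoder."""
    n = adj.shape[0]
    a_hat = adj + sp.eye(n)                         # add self-loops
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt          # D^-1/2 (A + I) D^-1/2
    hops, h = [feats], feats
    for _ in range(num_hops):
        h = norm @ h                                # one sparse matmul per hop
        hops.append(h)
    return np.concatenate(hops, axis=1)             # skip-connection-style concat

# Tiny example: a 4-node path graph with 3-dim node features
rows = [0, 1, 1, 2, 2, 3]
cols = [1, 0, 2, 1, 3, 2]
adj = sp.csr_matrix((np.ones(6), (rows, cols)), shape=(4, 4))
feats = np.eye(4, 3)
out = multi_hop_features(adj, feats, num_hops=2)
print(out.shape)  # (4, 9): original features plus two aggregated hops
```

Because there are no learnable weights and each hop is a single sparse product, the cost scales with the number of edges, which is consistent with the scalability argument made above; concatenating all hops rather than keeping only the last one is what the over-smoothing discussion later in this thread refers to as a skip connection.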
Summary: This paper proposes Graph World Model (GWM), a novel framework that integrates graph-structured data and multi-modal information into a unified world model for diverse tasks including prediction, generation, and planning. The authors present two variants (GWM-T and GWM-E) with distinct message-passing strategies and demonstrate competitive performance across six domains compared to domain-specific baselines. While the idea of unifying graph-based reasoning with multi-modal world modeling is promising, the technical depth and methodological clarity need further refinement to fully establish its novelty and scalability. Claims And Evidence: The claims and evidence make sense for the generalization problem. Methods And Evaluation Criteria: The proposed method is reasonable. The evaluation criteria are appropriate for the problem. Theoretical Claims: The detailed theoretical analyses are missing Experimental Designs Or Analyses: The experiments provide strong empirical validations for the proposed method. However, the analyses on the important hyperparameters are weak. Supplementary Material: I checked the supplementary materials; additional details on implementation are helpful. Relation To Broader Scientific Literature: The paper is well-situated within the literature on graph foundation model, multi-modal diffusion and graph-based retrieval-augmented generation Essential References Not Discussed: The important references are fully discussed from my point of view. Other Strengths And Weaknesses: The strengths of this paper are as follows: (1) The integration of graph message-passing into multi-modal world models is a novel contribution, enabling unified handling of structured and unstructured data across prediction, generation, and optimization tasks. (2) The empirical validation highlights that multi-hop aggregation consistently improves performance (e.g., in recommendation and RAG), underscoring the importance of graph structure for complex reasoning. 
(3) The demonstrated zero-shot/few-shot capabilities on unseen tasks suggest practical value for low-resource scenarios. The weaknesses of this paper are as follows: (1) Ambiguity in Multi-modal Fusion and Aggregation: a) While GWM-T converts multi-modal data into text tokens and GWM-E concatenates CLIP/BERT embeddings (Section 3.1, 4.1), the rationale for avoiding advanced fusion techniques (e.g., cross-modal attention) is unclear. For instance, tokenization in GWM-T may lose fine-grained visual semantics, and simple concatenation in GWM-E risks modality dominance (e.g., text overshadowing images). b) The parameter-free multi-hop aggregation (Section 4.2) adopts a "simplified GCN" design without adaptive mechanisms (e.g., attention in GATv2 or dynamic pruning in GRIT). This limits its ability to handle heterogeneous graphs or dynamically adjust aggregation weights during inference, potentially underutilizing graph structure (e.g., implicit vs. explicit edges in Section 2.2). (2) The paper positions GWM as a "world model" but lacks a formal connection to classical Markovian state transitions (Section 2.1). For example, how do graph-based actions (Section 2.2) align with traditional WM components like transition probability? The framework currently resembles a graph-enhanced multi-modal system rather than a principled extension of world models. (3) The simplified message-passing mechanism lacks theoretical grounding. For instance: a) How does the parameter-free design (Equation 4.2) avoid over-smoothing in deep aggregation? b) Does performance gain primarily stem from graph homophily (e.g., node similarity), and how does GWM handle non-homophilous graphs (common in real-world scenarios)? c) What guarantees exist for the alignment between graph transitions and task objectives (e.g., RAG retrieval vs. graph-level optimization)? 
Other Comments Or Suggestions: I would suggest that the authors formalize the connection to world models and analyze trade-offs between graph complexity and task performance. Questions For Authors: (1) Why prioritize parameter-free aggregation over learnable mechanisms (e.g., attention)? How does this design impact performance on heterogeneous graphs? (2) How are modality-specific embeddings aligned in GWM-E? Is there empirical evidence of modality dominance (e.g., text overshadowing images)? (3) Can the framework theoretically justify the relationship between graph transitions and task objectives (e.g., RAG retrieval vs. planning)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Response to Weaknesses 1:** Thanks for your questions. We answer your questions one by one: ***[Rationale of cross-modal fusion of GWM.]*** There are many ways to perform multimodal fusion. We selected two representative methods, not to avoid advanced fusion techniques. One is a simple and direct method that unifies multimodal information into text via GWM-T. The other method uses GWM-E, employing a cross-modal projector to unify different modal embeddings efficiently. Moreover, GWM-E does not have a modality dominance issue. We designed baselines named No-image and No-text, which mean training and testing solely on the text modality or image modality compared with GWM-E. We made comparisons on ALFWorld as follows. We can observe that the impact of images and text on GWM-E is almost equivalent, which demonstrates that GWM-E does not have modality dominance and all modalities are important. We will conduct more ablation studies in the next version to further substantiate this finding. | Method | Precision | Recall | F1 | |--------|-----------|--------|-----| | No-image | 92.95 | 92.80 | 92.86 | | No-text | 92.32 | 91.92 | 92.10 | | GWM-E | **97.31** | **96.52** | **96.17** | ***[Rationale of simplified GCN and its adaptiveness to heterogeneous graphs.]*** Indeed, our main experiment is based on homogeneous graphs. Because heterogeneous graphs often rely on predefined node and edge types, but we have multiple tasks and also need the model to generalize to unseen tasks, it is hard to define node and edge types in advance. If heterogeneous type information is represented as tokens or embeddings similar to GWM, ordinary GNNs may solve this, but it would lead to complexity in implementation and be costly. A potential way to extend directly from GWM-E to heterogeneous graphs is to perform separate multi-hop aggregations based on different types of edges and then flatten the resulting node embeddings into sequences to feed into the LLM decoder. 
However, we believe that our results on six real-world tasks validate GWM’s effectiveness and efficiency, highlighting our contribution. We will leave the exploration of heterogeneous graphs to future work. **Response to Weaknesses 2:** Thanks for your comments. We have already detailed how GWM models state transitions in section 2.2. As described in ***[lines 56-73]***, we use a graph for general multimodal state modeling. Moreover, to address the diversity of action settings in traditional WM, we model actions as action nodes (***[lines 75-88]***) and unify them into text descriptions as in ***[section 3.3 and section 4.3]***. This allows GWM to use a unified model to solve multiple tasks. Lastly, in ***[lines 89-93 of section 2.2]***, we explain that our GWM essentially learns a transition function, which is aligned with principled world models. **Response to Weaknesses 3:** Thanks for your comments. We answer your questions step by step: ***[Avoid over-smoothing.]*** As introduced in ***[lines 247-253]*** of section 4.2, we input the multi-hop embeddings into the decoder. This is equivalent to a skip connection, a standard practice that alleviates oversmoothing [1, 2]. ***[GWM handles non-homophilous graphs.]*** Essentially, both versions of GWM transform graphs into a standard form interpretable by LLMs. Existing work (Liu et al., 2023a; Chen et al., 2024a) has demonstrated that LLMs can also effectively understand graphs, thus we anticipate that LLMs may outperform GNNs in comprehending homophilous or non-homophilous graphs. But if we must understand non-homophilous graphs from the perspective of GNNs, a potential way is to integrate [3] based on GWM-E, but this would introduce complex implementation and significant costs. Moreover, the performance across six representative real-world tasks has already validated the practicability and generalizability of GWM. We will include the above discussions in the next version. 
***[Graph transitions align with task objectives.]*** As discussed in section 2.3, our action is essentially a description of the task objective. For example, in the RAG task, the action of GWM is to answer specific queries with retrieved contexts. Thus, the graph transition modelled by GWM is aligned with the task objective without a theoretical gap. **[1]** Optimization of graph neural networks: Implicit acceleration by skip connections and more depth. **[2]** Representation learning on graphs with jumping knowledge networks. **[3]** Revisiting heterophily for graph neural networks. **Response to missing theory and hyperparameters analysis:** Thanks for the comment. We prioritize empirical analysis of GWM across multiple tasks over theoretical proofs, which are challenging in LLM research. Given the foundation model's numerous hyperparameters and computational costs, we focused on introducing key hyperparameters (lines 357-372 and Appendix B) and analyzed critical ones like hop num (section 5.2), balancing reproducibility with practical costs. --- Rebuttal Comment 1.1: Comment: Thanks for the response. Most of my concerns have been addressed. I will raise the score to weakly accept. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful and constructive feedback. We are glad to know that our responses have addressed most of your concerns. We will carefully incorporate your suggestions in our next version to further improve the quality of the manuscript.
Summary: In this paper, the authors propose Graph World Model (GWM), a framework designed to integrate both unstructured and graph-structured data with multi-modal information. They introduce two GWM variants: a token-based method that transforms multi-modal data into textual representations prior to message passing, and an embedding-based method that operates within a unified embedding space. The framework further incorporates action nodes to address diverse tasks and applies message-passing algorithms to combine structured information. The authors demonstrate competitive or superior performance against domain-specific baselines across six tasks and multiple domains. However, limited experimental comparisons, subpar presentation, and insufficient analysis negatively impact the paper’s overall quality. Claims And Evidence: The authors propose a framework that is clear. Methods And Evaluation Criteria: - The paper bridges world models with graph-based approaches, addressing a clear gap in the literature. This integration allows handling both structured and unstructured data in a unified framework. - The benchmark datasets makes sense. The experimental evaluation across six diverse tasks (multi-modal generation/matching, recommendation, graph prediction, multi-agent collaboration, RAG, and planning/optimization) demonstrates the versatility of the approach. Theoretical Claims: This paper mainly provides the empirical analysis. Experimental Designs Or Analyses: 1. **Limited experimental comparison**. The experiments lack comprehensiveness in several ways. (1) Table 6 fails to clarify whether baselines were fine-tuned on task-specific datasets, making it difficult to assess the true comparative performance. (2) The authors should have included baseline graph world models (Zhang et al. 2021; Zhu et al. ICLR 2023) in their experiments, particularly for world planning and optimization tasks, to properly demonstrate the advantages of their proposed methods. 2. 
**Scalability and efficiency analysis**. The paper fails to address computational complexity and scalability considerations adequately. For real-world applications involving large graphs, multi-hop message passing could become computationally prohibitive, yet this limitation receives insufficient attention. 3. **Lack of qualitative analysis**. The authors present only quantitative results without providing qualitative comparisons, particularly for world generation tasks. Including actual examples of model outputs would help readers better understand the practical differences between GWM and baselines. Supplementary Material: Yes. The supplementary material adds more on GWM tasks and the details of benchmark. Relation To Broader Scientific Literature: The paper inadequately differentiates its methodology from previous graph world models such as Zhang et al. The authors focus primarily on application differences in the related work section rather than providing a thorough methodological comparison, which would better highlight their novel contributions. Essential References Not Discussed: n/a Other Strengths And Weaknesses: **Other Weaknesses** **Poor presentation affecting readability**. Figure 3, which illustrates the GWM framework, suffers from quality issues that impede understanding. (1) Target nodes are not clearly specified—variables should be directly labeled. Additionally, the target nodes and prompt in the top-right of Figure 3 are two separate inputs but are not represented with distinct arrows. (2) The use of Stable Diffusion in the top-right creates confusion since it was designed for text-to-image generation, while the diagram shows token inputs and "next state" outputs. The authors should clarify that the model operates in Stable Diffusion's latent space. Other Comments Or Suggestions: To enhance the presentation quality, the authors are encouraged to refine the figures and incorporate visualized examples to better illustrate the pipelines. 
Questions For Authors: Most of the baselines are relatively weak (e.g., Table 6). Can the author supplement the baseline that tuned on the task-specific dataset. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1. Can the author supplement the baseline that tuned on the task-specific dataset (e.g., Table 6).** **Response:** Thanks for the comments. Indeed, we have detailed the specific settings of the baselines for Table 6 in Appendix A.4 (***[lines 779-796]***). We primarily selected three classic agent baselines for comparison: CoT, ToT, and Few-shots. Although these baselines were not additionally fine-tuned on this dataset, the results of GWM shown in Figure 6, as described in ***[lines 382-384]***, were also not additionally fine-tuned. Therefore, from this perspective, the comparison between GWM and the baselines is fair. However, following the reviewer's suggestion, we compared two baselines fine-tuned on AgentClinic: FT is a baseline finetuned on the task using the Llama-3-8B model. Longformer [1] is a strong baseline for long document understanding. The table below reports the comparison results between GWM and these baselines, demonstrating the effectiveness of GWM in AgentClinic. | Method | Acc | Recall | F1 | |------------|-----------|-----------|-----------| | FT | 45.00 | 45.40 | 44.00 | | Longformer | 25.00 | 20.20 | 14.00 | | GWM-T | **50.00** | **46.42** | **48.20** | | GWM-E | 45.00 | 39.57 | 35.56 | ***[1]*** Longformer: The long-document transformer. **Q2. The authors should have included baseline graph world models (Zhang et al. 2021; Zhu et al. ICLR 2023) in their experiments.** **Response:** Thanks for your feedback. Indeed, the works on graph world models (Zhang et al. 2021; Zhu et al. ICLR 2023) do not consider the processing of multimodal information in their modeling and primarily focus on offline scenarios in the RL field. These limitations restrict their capabilities as world models and also make them inapplicable for direct comparison in our scenario. In contrast, GWM is a world model capable of handling multimodal information and generalizing across various tasks. 
We will supplement these discussions in the related work section of the next version.

**Q3. The paper fails to address computational complexity and scalability considerations adequately.**

**Response:** Thanks for your valuable comments. In fact, we address computational complexity and scalability from two perspectives. First, we introduce GWM-E, which, as discussed in ***[lines 275-277]***, fixes the LLM's parameters and only fine-tunes the projector parameters, thereby reducing computational complexity. Additionally, as described in Section 4.2 ***[lines 234-246]***, we designed a simplified, parameter-free GCN. This not only reduces computational complexity compared to traditional GNNs (Wu et al., 2019; He et al., 2020a) but also benefits from PyTorch's sparse matrix multiplication acceleration, allowing it to scale more effectively to large graphs. As shown in ***Tables 10 and 11*** in the appendix, we have conducted experiments with large graphs at the scale of hundreds of thousands of nodes, demonstrating GWM's ability to address computational complexity and scalability.

**Q4. The authors present only quantitative results without providing qualitative comparisons.**

**Response:** Thanks for the insightful advice. We have added qualitative comparisons for all tasks in Tables 1-7 of [rebuttal.pdf](https://anonymous.4open.science/r/ICML_2025_14809-7B6C/rebuttal.pdf). We will include this information in the appendix of the next version.

**Q5. Figure 3, which illustrates the GWM framework, suffers from quality issues that impede understanding. (1) Target nodes are not clearly specified—variables should be directly labeled. Additionally, the target nodes and prompt in the top-right of Figure 3 are two separate inputs but are not represented with distinct arrows. (2) The use of Stable Diffusion in the top-right creates confusion since it was designed for text-to-image generation, while the diagram shows token inputs and "next state" outputs.
The authors should clarify that the model operates in Stable Diffusion's latent space.** **Response:** Thanks for your valuable suggestions. Following your suggestions, we first specified the target nodes in Figure 3. Next, regarding the issue with the top right arrow, we intended to prompt the target nodes, so we changed the content on the arrow to 'Prompt (Target nodes)'. Lastly, concerning the input and output issues of Stable Diffusion in GWM-T, as described in ***[lines 229-234]***, our input included action nodes and target nodes at the token level, and the output image served as the next state. To further clarify this process, we added legends to distinguish between different colors and shapes. We have updated the modified figure in Figure 2 of [rebuttal.pdf](https://anonymous.4open.science/r/ICML_2025_14809-7B6C/rebuttal.pdf) and it will be updated in the next version.
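As a side illustration of the "simplified, parameter-free GCN" mentioned in the Q3 response above: a common parameter-free scheme multiplies node features k times by the symmetrically normalized adjacency with self-loops, S = D^{-1/2}(A+I)D^{-1/2}, with no learnable weights. This is only an outside sketch (toy dense, pure-Python version for self-containment; GWM's actual implementation is not shown here, and as the rebuttal notes it relies on sparse matrix multiplication for scalability):

```python
import math

def propagate(adj, feats, k=2):
    """Parameter-free graph propagation: apply S = D^{-1/2}(A+I)D^{-1/2}
    k times to the node features. No trainable parameters, so it can be
    precomputed once; with a sparse A this is k sparse-dense products."""
    n = len(adj)
    # add self-loops: A + I
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    d = [sum(row) for row in a_hat]  # degree of each node
    # symmetric normalization
    s = [[a_hat[i][j] / math.sqrt(d[i] * d[j]) for j in range(n)] for i in range(n)]
    for _ in range(k):
        feats = [[sum(s[i][m] * feats[m][j] for m in range(n))
                  for j in range(len(feats[0]))] for i in range(n)]
    return feats

# Toy path graph 0-1-2 with one-hot node features.
adj = [[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
out = propagate(adj, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], k=2)
```

Since S is symmetric, the result of propagating one-hot features is simply S^k, which is why the toy output above is a symmetric matrix.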
Summary: This paper proposes a Graph World Model that supports both unstructured and graph-structured states with multi-modal data. The proposed model can tackle diverse sets of tasks and act as a graph-based foundation model. The results on numerous datasets and tasks show SOTA or comparable results on most tasks compared to the baselines. Claims And Evidence: - The paper claims to propose a graph world model that can tackle multidomain tasks and be applied to both structured and unstructured data. - The claim is backed by numerous experiments and empirical results on multidomain and multimodal datasets with different combinations of structured (graphs) and unstructured (text, image). - The claims are clear and the results are convincing. Methods And Evaluation Criteria: Yes, the proposed method claims to act as a foundation model, and the results on 11 datasets compared to the baselines on those datasets show better or comparable results. Theoretical Claims: N/A Experimental Designs Or Analyses: The designed experiments are sound and valid and show the analysis of the proposed model in different tasks, datasets, and scenarios. Supplementary Material: Yes. - The details of datasets and baselines - The hyperparameters - The used prompts Relation To Broader Scientific Literature: The paper seems to be novel to the best of my knowledge in terms of using a graph-based world model that considers states and actions as graphs and enables the use and combination of multimodal structured and unstructured data, which is valuable. The method, although different in nature, solves a similar task to LANISTR [a], which would be nice to cite and contrast against. Essential References Not Discussed: Although the proposed method in [a] is different from the one here, they could potentially solve similar tasks. It would be nice to contrast or compare against it. [a] Ebrahimi, Sayna, et al. "LANISTR: Multimodal Learning from Structured and Unstructured Data." CoRR (2023). 
Other Strengths And Weaknesses: Strengths: 1. The paper is very well-written, and it was a delight to read. 2. The contributions are clear, and the claims are backed by the experiments. 3. The model is extensively evaluated on multiple tasks and datasets and shows SOTA / comparable results in most scenarios. Weaknesses: 1. It would be nice to include the SOTA results for the different tasks. Although for most tasks the baselines are recent, in Tab. 4 the provided baselines are old. Maybe the authors can include FREEDOM [e] or another recent SOTA model for recommendation. [e] Zhou, Xin, and Zhiqi Shen. "A tale of two graphs: Freezing and denoising graph structures for multimodal recommendation." Proceedings of the 31st ACM International Conference on Multimedia. 2023., https://dl.acm.org/doi/pdf/10.1145/3581783.3611943 Other Comments Or Suggestions: 1. The graphics can be improved to make the method clearer and more understandable. E.g., in Figs. 1 and 3 it is not clear what each color identifies. In Fig. 1, what is the difference between intended and unintended actions, and what is the difference between the node/edge/graph-level actions? Questions For Authors: Could the authors contrast their work against LANISTR [a] for general multimodal learning and, if possible, compare to FREEDOM [e]? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1. The method, although different in nature, solves a similar task to LANISTR [a], which would be nice to cite and contrast against.**

**Response:** Thanks for the valuable suggestions. Indeed, we have already compared two baselines that, like LANISTR, were pre-trained via modality alignment and then fine-tuned on downstream tasks: Contrastive MLP (Liu et al., 2022) and CLIP FT (Radford et al., 2021) in ***[lines 348-352] and [Table 3]***. Nevertheless, we followed the reviewer's suggestion and compared the performance of GWM with LANISTR. Due to the limited rebuttal time, we focused on comparing GWM and LANISTR when directly fine-tuned on the Goodreads dataset for the multi-modal matching task. It can be observed that GWM achieves a significant performance gain over LANISTR. In the next version, we will compare with LANISTR on more tasks and evaluate its performance after pretraining on our dataset.

| Model | Accuracy | Recall | F1 Score |
|-------|-----------|---------|----------|
| LANISTR | 54.3 | 51.8 | 43.6 |
| GWM-T | 84.22 | 85.66 | 85.29 |
| GWM-E | **88.82** | **89.73** | **89.06** |

**Q2. In Tab. 4, the provided baselines are old. Maybe the authors can include FREEDOM [e] or another recent SOTA model for recommendation.**

**Response:** Thank you for your insightful comments. In the recommendation task, we primarily selected classic and representative baselines for comparison. Our goal in comparing with task-specific baselines is not to demonstrate that GWM can beat SOTA on all tasks, but to show that GWM, as a unified model, has good generalization capabilities across many tasks, including recommendation. Nevertheless, we followed the reviewer's suggestion and compared the performance of GWM with FREEDOM on the recommendation task, shown in the following table. We can observe that both GWM variants achieve better performance than FREEDOM, which demonstrates the effectiveness of GWM.
We will update the experimental results and discussion in the next version.

| Model | Baby Recall | Baby F1 | Sports Recall | Sports F1 | Clothing Recall | Clothing F1 |
|-------|------|---------|--------|---------|----------|---------|
| FREEDOM | 60.35 | 66.16 | 63.47 | 70.53 | 70.20 | 78.40 |
| GWM-T | 70.84 | 75.08 | 84.29 | 88.60 | 71.73 | 74.26 |
| GWM-E | **76.72** | **84.74** | **88.78** | **90.32** | **75.27** | **84.06** |

**Q3. The graphics can be improved to make the method clearer and more understandable. Eg. in Figs. 1,3 it is not clear what each color identifies in Fig. 1: what is the difference between intended and unintended action, and what is the difference between the node/edge/graph level actions?**

**Response:** Thanks for your valuable advice. Following your suggestions, we have separated the descriptions of different action levels and the methods for obtaining target nodes in Figure 1. Additionally, we have standardized the colors in corresponding parts of Figures 1 and 3. Lastly, we have added captions to each figure to differentiate the meanings of various colors and shapes. The revised figures are shown in Figures 1 and 2 of [rebuttal.pdf](https://anonymous.4open.science/r/ICML_2025_14809-7B6C/rebuttal.pdf). We will update these modifications in the next version.
Ultra Lowrate Image Compression with Semantic Residual Coding and Compression-aware Diffusion
Accept (poster)
Summary: The paper introduces an ultra-low rate image compression method combining Semantic Residual Coding and a Compression-aware Diffusion Model. SRC efficiently encodes semantic differences between original and compressed latent images, minimizing redundancy, while CDM aligns diffusion steps with compression levels to improve reconstruction fidelity. Experiments demonstrate that ResULIC surpasses existing diffusion and GAN-based methods, achieving up to 80.7% and 66.3% BD-rate savings in LPIPS and FID metrics, respectively, particularly excelling at ultra-low bitrates. Claims And Evidence: Most claims are supported by experimental evidence regarding Semantic Residual Coding and the Compression-aware Diffusion Model, both validated by quantitative and qualitative analyses. However, the Pfo module increases encoding time (~175s/image), raising practical concerns not fully addressed. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriately designed for ultra-low bitrate image compression. However, the practicality of the computationally expensive Perceptual Fidelity Optimization module (~175s/image) warrants further clarification. Theoretical Claims: The theoretical claims and proofs look correct and are clearly presented. However, I did not look into the details very much. Experimental Designs Or Analyses: Yes, the experimental design and analyses are rigorous, with metrics and ablations. Minor issues include reliance on unofficial baseline implementations (e.g., PerCo) and insufficient clarity on differences from originals. Supplementary Material: I reviewed the supplementary material, covering additional evaluations on multiple datasets (Kodak, MS-COCO, DIV2K, Tecnick), implementation details, complexity analyses, and extended visual comparisons. 
Relation To Broader Scientific Literature: The paper combines diffusion-based generative models (PerCo, DiffEIC), multimodal semantic retrieval (MLLMs), and residual coding into a unified compression framework. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - combination of existing ideas - clearly presented and significant Weaknesses: - high computational cost Other Comments Or Suggestions: N/A Questions For Authors: - Given the high computational cost of PFO, can you clarify practical use cases? - Could differences between your reproduction and the official PerCo implementation affect your comparisons? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your kind recognition and these insightful questions; they are indeed crucial for understanding our work.

Q1: **Regarding the computational cost of PFO and practical use cases**

A1: For PFO, our goal is to demonstrate the potential of prompt optimization in further improving overall fidelity, even though it is currently computationally intensive. Additionally, **PFO introduces no additional latency during decoding**, making it acceptable for performance-critical scenarios, such as cloud-based restoration services where computational resources are readily available and encoding latency is not strictly constrained. However, even without PFO, our method still achieves superior performance compared to existing approaches (as demonstrated in Table 1), which underscores the effectiveness of our core framework. Furthermore, we are actively working on optimizing PFO's efficiency to facilitate its wider practical deployment in future applications.

Q2: **Reproduction vs. official PerCo implementation**

A2: Since the official PerCo code is not publicly available, we mainly compared with a publicly available version. This reproduced version shows some drop in metrics like LPIPS/MS-SSIM but is better in FID/KID and visual quality. Notably, to ensure a fair comparison with both versions, we have also benchmarked against PerCo's official objective results (in Table 9) and subjective comparisons (in Fig. 20) in Appendix D; the table is also listed below. Our method consistently outperforms PerCo (official), even on its strongest metrics such as PSNR/LPIPS/MS-SSIM.

| Method | LPIPS | PSNR | MS-SSIM |
|:-:|:-:|:-:|:-:|
| PerCo (official) | 0 | 0 | 0 |
| PerCo (SD2.1) | 21.7 | 88.3 | 15.4 |
| **ResULIC w/o PFO** | -34.5 | -54.7 | -33.4 |
| **ResULIC** | -41.5 | -52.0 | -32.7 |

All values are BD-Rate(%)↓ with PerCo (official) as the anchor.
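For readers unfamiliar with the BD-Rate(%) figures used throughout these comparisons: they come from the standard Bjøntegaard-delta procedure, which fits log-rate as a polynomial of the quality metric for each codec and integrates the gap over the overlapping quality range. A minimal sketch (illustrative only, not the authors' evaluation code; the rate/quality points are made up):

```python
import numpy as np

def bd_rate(rates_ref, quality_ref, rates_test, quality_test):
    """Bjontegaard-delta rate: average percent bitrate change of the test
    codec vs. the reference at equal quality. Fits cubic polynomials of
    log-rate vs. quality and integrates their gap over the overlapping
    quality range."""
    p_ref = np.polyfit(quality_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(quality_test, np.log(rates_test), 3)
    lo = max(min(quality_ref), min(quality_test))
    hi = min(max(quality_ref), max(quality_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

# Toy rate-quality curve: identical curves give 0%, halving every rate
# at the same quality gives about -50%.
q = [30.0, 32.0, 34.0, 36.0]
r = [0.01, 0.02, 0.04, 0.08]
```

Negative values (as in the table above) therefore mean the test codec needs that percentage less bitrate for the same quality.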
Summary: This paper introduces ResULIC, a novel framework for ultra-low-bitrate image compression that integrates semantic residual coding (SRC) and a compression-aware diffusion model (CDM). SRC is proposed to capture the semantic disparity between the original image and its compressed latent representation. CDM is used to establish a relationship between diffusion time steps and compression ratios, improving reconstruction fidelity and efficiency. Compared to the state-of-the-art diffusion-based compression method PerCo, ResULIC achieves superior rate-distortion performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Limited real-world testing or user studies. While objective metrics like LPIPS and FID are useful, human perceptual studies (e.g., Mean Opinion Score) would provide additional validation of realism. This is not a major issue, but including user evaluations would further strengthen the claims. Supplementary Material: Lack of the implementation details of baselines. Please clarify whether the competing baselines use pretrained models or is trained from scratch. Relation To Broader Scientific Literature: 1. ResULIC builds directly on recent research in learned image compression (Balle et al. (2017, 2018), HiFiC (Mentzer et al., 2020)) , multimodal generative models (PerCo (Careil et al., 2024)), and diffusion-based restoration (ResShift (Yue et al., 2024), RDDM (Liu et al., 2024)). 2. It differentiates itself by combining residual-based multimodal compression with diffusion-aware bitrate modeling, which has not been explicitly addressed before. 3. Its evaluation approach is aligned with state-of-the-art perceptual fidelity methods, reinforcing its relevance in the field. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. 
The idea of using MLLM-based retrieval to encode only missing semantic details rather than full textual descriptions is an innovative and practical refinement. 2. CDM dynamically adjusts diffusion steps based on compression levels, improving efficiency without needing extensive modifications to pre-trained models. 3. Strong experimental results and relevance to ultra-low bitrate applications. 4. Comprehensive experiments and ablation studies. Weaknesses: 1. Why was Stable Diffusion v2.1 specifically chosen as the backbone? If the latest SOTA diffusion model FLUX were used, it might be possible to further improve the compression performance. 2. Limited real-world user studies: While ResULIC shows strong performance in perceptual metrics (FID, LPIPS), there are no human perceptual studies. Real-world testing on actual user preference rankings could validate whether the perceived quality aligns with objective metrics. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful comments and recognition of our work's potential. The questions raised are highly valuable for improving the robustness and impact of our research.

Q1: **Implementation details of compared methods**

A1: Thank you for the reminder. To ensure a fair comparison, we evaluated all methods as follows: for open-sourced approaches (e.g., DiffEIC, PerCo, CDC, MS-ILLM, Text-Sketch), we used their publicly available pretrained models; for non-open-sourced methods (e.g., GLC, CorrDiff), we relied on the official results reported in their respective publications. Additionally, since the open-sourced HiFiC model operates at a higher bitrate, its data was sourced from the version reproduced by the DiffEIC authors. For PerCo, the performance of the open-sourced version differs slightly from the official results, so we provided comparisons with both versions.

Q2: **Choice of Stable Diffusion v2.1 as Backbone**

A2: Thank you for raising this insightful question. It is possible, and in line with expectations, that larger and better diffusion models would improve performance. The choice of Stable Diffusion v2.1 as the backbone was primarily driven by two key considerations:

- Fair Comparison with Existing Works: Current state-of-the-art methods (e.g., PerCo, DiffEIC) adopt SD v2.1 as their base model. To ensure a direct and equitable evaluation of our method's performance improvements, we maintained consistency in the backbone architecture.
- Complexity-Performance Balance: While advanced models like FLUX could potentially enhance compression performance, their large parameter size (12B) and high GPU memory usage make them impractical for real-world image compression applications. We therefore opted for smaller models to validate the reliability of our method.

We fully acknowledge that leveraging newer diffusion models is a promising direction for further improvements.
In fact, we are actively working to secure additional computational resources to explore such extensions in future research. At the same time, we are investigating how to better utilize the quantized version of FLUX to enhance our performance. Your suggestion aligns perfectly with our roadmap for advancing this work.

Q3: **User Studies and Perceptual Validation**

A3: To demonstrate the comparison of subjective quality, we have included example images in the main text and appendix. However, we fully acknowledge that user studies are indispensable for comprehensive validation. After concentrated testing efforts, we are pleased to share some initial subjective user study results (https://anonymous.4open.science/api/repo/public-E45F/file/user_study.png?v=0cb25072). Due to time constraints, we conducted a preliminary evaluation at two bitrate points with 20 participants. Each participant assessed randomly selected images from the DIV2K test set (30 images per bitrate point), and the final results were calculated as the average of all votes. The subjective tests confirm our method's clear visual advantage over the baselines. We will keep working on providing a more comprehensive user study. Thank you again for the constructive feedback; we will incorporate these suggestions to strengthen our research further.
Summary: The present paper describes a perception-oriented image compression framework that utilizes the residual multi-modal semantics (w.r.t what’s could be recovered by the latent decoder) as guidance conditioning for the diffusion denoising process towards improved fidelity at lower rate costs. The insight that fidelity improves with increased number of denoising steps, for a given bitrate, the paper also describe a variable compression level method with improved flexibility. The method differs from prior MLLM-based image compression methods by accounting for the semantic redundancy between the image latent representations and multi-modal embeddings, thus yielding improved rate-perception trade-off. Claims And Evidence: The empirical evidence and ablative analysis are comprehensive and could adequately support the claims. It makes sense to steer the denoising diffusion process with only semantic residuals and the experimental results look promising. Methods And Evaluation Criteria: The proposed methods and evaluation criteria, for the most part, make sense for the problem at hand. The only issue is, I wonder what is the difference between conditioning the diffusion model directly with visual cues instead of explicit texts? The semantic residual is expected to be of high-frequencies whose encapsulated information may not always be `conveyable' via textual descriptions. Would there be information loss during semantic residual retrieval? Also, it could be seen from Figure 5 in the main text and, for instance, Figure 23 in the supplementary materials, that the textual prompts could contain gibberish that could not be comprehended by human. Why would such prompts be beneficial to guiding the denoising process? Theoretical Claims: I have checked the correctness of the proofs. They look sensible to me. Experimental Designs Or Analyses: I have checked the soundness of experimental designs and analyses in the main text as well as the supplementary materials. 
Please refer to **Methods and Evaluation Criteria** section for my questions. Supplementary Material: I have reviewed the B. Mathematical Framework, C. Experiment Details, and D. More Results sections. Relation To Broader Scientific Literature: They are related to foundational model-based generation, in particular conditional diffusion models and multi-modal vision language models. Essential References Not Discussed: No essential references missing to the best of my knowledge. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: = I think Figure 5 is not properly referred to in the main text. Questions For Authors: Please refer to **Methods and Evaluation Criteria** section for my questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: **What is the difference between conditioning the diffusion model directly with visual cues instead of explicit texts?**

A1: In our method, explicit texts primarily compensate for semantic gaps in the visual condition, especially at ultra-low bitrates. For example:

- **Low-bitrate regime**: In extreme low-bitrate scenarios where compressed features may retain only basic structural information (e.g., object locations and edges), relying solely on these limited visual cues can lead to vital semantic mismatches (e.g., cat to dog). Our method adds text descriptions to fill in missing details, bridging the semantic gap caused by heavy compression. The subjective results can be viewed at: https://anonymous.4open.science/api/repo/public-2D3D/file/test2.jpg?v=1ef61402
- **High-bitrate regime**: As the bitrate increases, the visual condition retains sufficient information, and the reliance on text gradually diminishes. This is empirically validated in Figures 9 and 10 of our paper, where the residual text length drops to zero at higher bitrates.

Thus, visual conditions and texts are synergistic: the former anchors structural fidelity, while the latter carries compact high-level semantics (visual cues contain richer and more precise information, while texts are more compact). Our hybrid solution utilizes both sources to achieve improved overall performance.
| Bpp (w/o text) | LPIPS↓ | Bpp (w/ text) | LPIPS↓ |
|:-:|:-:|:-:|:-:|
|0.002163|**0.496696**|0.002681|**0.468738**|
|0.004901|**0.403217**|0.005481|**0.386965**|
|0.021926|**0.205847**|0.022849|**0.232723**|
|0.062561|**0.135126**|0.063262|**0.123806**|

Q2: **Would there be information loss during semantic residual retrieval?**

A2: During semantic residual retrieval, what we extract is not directly the semantic content of residual features. Rather, as illustrated in Figure 6, we use an MLLM to extract the residual information between the caption of the original image $x$ and the caption of the compressed image $x'$: $caption_{res}=MLLM(x)-MLLM(x')$, instead of $MLLM(x-x')$. The table below shows the results of semantic residual retrieval for different models (GPT-4o, Llama-3.2-11B) on Kodak. The semantic residual in the form of text is therefore usually conveyable as long as the text description matches the input image. In terms of text-level information loss, it mainly depends on the capability of the underlying MLLM, as shown in the table.

| Bpp (w/o SRR) | LPIPS↓ | Bpp (w/ SRR, GPT-4o) | LPIPS↓ | Bpp (w/ SRR, Llama-3.2-11B) | LPIPS↓ |
|:-:|:-:|:-:|:-:|:-:|:-:|
|0.005189|**0.476899**|0.00372|**0.468738**|0.003737|**0.493859**|
|0.008308|**0.386744**|0.006777|**0.386965**|0.006969|**0.390045**|
|0.033089|**0.233531**|0.031589|**0.232723**|0.031614|**0.233985**|

From the table, we can observe that since GPT-4o possesses stronger capabilities, the LPIPS of reconstructions after SRR remains nearly identical to that of reconstructions from the original captions. In contrast, Llama-3.2-11B shows some degradation, suggesting potential loss of certain keywords in the process. However, as you have rightly considered, the semantic residual represented in text may not perfectly align with the pixel-level residual signal. Hence, we proposed the PFO module to compensate for the feature information that natural language cannot describe.
Please correct us if we misunderstood the question you raised.

Q3: **Why would such "gibberish" prompts be beneficial to guiding the denoising process?**

A3: Thank you for raising this insightful question. We clarify the PFO pipeline as follows: a word is first converted into the embedded token space, and PFO performs a search in this token space through gradient backpropagation guided by output image reconstruction quality. The token vector is iteratively updated and finally converted back to a word. Semantic control in diffusion models operates through CLIP's text embedding space rather than surface-level language, and PFO performs gradient-based search in this latent token space, optimized solely for image reconstruction quality without explicit linguistic constraints. Consequently, PFO may break the grammar and readability of the prompt, and the generated captions may not always exhibit high readability (though they still retain interpretable tokens such as "DBZ (Dragon Ball Z)," "pagoda," and "temples," as shown in Figure 23). However, they achieve higher semantic alignment with the original image in CLIP's embedding space. The high-dimensional token space encapsulates information beyond the expressive capacity of human-readable language, thereby enabling more effective guidance.

Q4: **Figure 5 is not properly referred to in the main text**

A4: Thank you for the reminder. The principle behind Figure 5 is indeed consistent with our response to **Q3**, and we will add a detailed explanation in the main text.
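The token-space search described in A3 can be illustrated with a toy example: optimize a continuous embedding by gradient descent on a loss, then snap it back to the nearest vocabulary token. The vocabulary, embeddings, and target below are invented stand-ins for CLIP's token space and the reconstruction objective; the point is that the final discretization step is what can produce unreadable prompts, not the optimization itself.

```python
import math

# Toy stand-ins for token embeddings and the direction favored by the
# (toy) reconstruction loss -- all values are made up for illustration.
vocab = {"cat": (1.0, 0.0), "dog": (0.0, 1.0), "pagoda": (0.8, 0.6)}
target = (0.75, 0.66)

def loss(e):
    # squared distance to the target direction, a stand-in for the
    # image-reconstruction objective
    return sum((a - b) ** 2 for a, b in zip(e, target))

# 1) Gradient descent in the continuous embedding space
#    (the gradient of ||e - target||^2 is 2*(e - target)).
e = [0.0, 0.0]
for _ in range(200):
    e = [ei - 0.1 * 2.0 * (ei - ti) for ei, ti in zip(e, target)]

# 2) Project back to the nearest vocabulary token; this discretization
#    is where readability can be lost.
word = min(vocab, key=lambda w: sum((a - b) ** 2 for a, b in zip(vocab[w], e)))
```

Here the optimized vector lands near "pagoda", echoing the interpretable tokens observed in Figure 23; in a larger vocabulary the nearest token can just as easily be an unreadable fragment.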
Summary: This paper proposes a diffusion model-based image compression method. First, a pretrained codec is used to obtain a latent representation, which guides the generation process of the diffusion model. Second, semantic information is introduced as additional guidance. To reduce the overhead of transmitting semantic information, only the residual between the ground truth and the decoded image’s semantics is transmitted. Experimental results show that on CLIC20, the proposed method achieves a lower FID compared to other approaches. For this paper, I cannot provide an evaluation of the novelty of the diffusion model part, as I am not very familiar with diffusion models. My main concern lies in the model's complexity and its comparison with GLC. Claims And Evidence: Yes Methods And Evaluation Criteria: As a perceptual quality-optimized image compression method, it is reasonable for the authors to use FID, LPIPS, and DISTS as evaluation metrics. Theoretical Claims: I am not very familiar with diffusion models, but I reviewed the proof as thoroughly as possible. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I primarily reviewed the code in the supplementary materials. Relation To Broader Scientific Literature: Researchers working on integrating diffusion models into image compression may find this paper interesting. Essential References Not Discussed: I believe this paper provides a sufficient discussion of related work but lacks coverage of distortion-oriented image compression methods. Other Strengths And Weaknesses: This paper achieves strong performance in terms of FID. Compared to other diffusion-based methods, it requires fewer iteration steps, resulting in a faster decoding speed. However, I believe its decoding speed is still slower than GAN-based methods and comes with higher computational complexity. According to Figure 15, its LPIPS performance is not as good as the GAN-based GLC, yet it comes with higher complexity. 
For evaluation on the CLIC2020, DIV2K, and Tecnick datasets, the authors followed the approach of CDC (Yang & Mandt, 2022) by resizing images to a short side of 768 and then center-cropping them to 768×768. I wonder if the authors could provide results for full-resolution inference, as I am curious about the memory overhead of the diffusion model during full-resolution inference. In the GLC paper, the FID evaluation method is consistent with that of HiFiC. The authors should compare their method with GLC. I am generally leaning towards borderline. Other Comments Or Suggestions: line 1033: which takes approximately **175s minutes** for a Kodak Questions For Authors: How to understand the result in Table 4 where fewer steps lead to better performance? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. Regarding your inquiry, we would like to provide the following clarifications:

Q1: **Comparing with GAN-based GLC**

A1: We highlight our advantages over GLC as follows:

1. **Ultra-low Rate Support**: GLC is a very competitive and representative GAN-based compression method. However, similar to other GAN-based approaches (e.g., MS-ILLM, HiFiC), it faces challenges at ultra-low bitrates. Although GLC achieved bitrates below 0.01 bpp on datasets like CelebAHQ, such performance requires task-specific retraining for facial images, limiting its generalizability. For general natural images (e.g., Kodak, CLIC), GLC's minimum achievable bitrate is approximately 0.02 bpp, whereas our method remains effective even at an ultra-low bitrate of 0.0026 bpp. **As the bitrate decreases further, the effectiveness of feature-only optimization gradually diminishes.**
2. **Simplified and Stable Optimization**: GLC employs a GAN and a complex three-stage training procedure for refining compressed features. In contrast, our approach achieves superior visual quality and competitive objective results with a simple ***MSE-optimized latent compressor***, focusing mainly on how diffusion models can effectively enhance these features.

From the perspective of feature compression, GAN-based GLC, diffusion-based PerCo, and our method share a similar principle: using a pretrained VQGAN/VAE to compress images in the latent feature space. However, the optimization strategies diverge. In essence, GAN-based and diffusion-based compression frameworks are two competitive paths. The two approaches are highly complementary and can mutually reinforce each other; combining GLC's feature refinement with diffusion recovery could further advance ultra-low-bitrate image compression performance.

Q2: **Full-resolution Memory Consumption**

A2: We fully understand this concern and would like to provide the following clarifications: 1.
**Test Setting**: As mentioned by the reviewer, our current test setting aligns with existing diffusion-based methods such as CDC and DiffEIC for fair comparison. 2. **Memory Benchmark**: On full-resolution CLIC images (2048×1365): - While GLC's closed-source nature prevented direct evaluation, its backbone VQGAN's peak memory usage is 11.2 GB (without GLC's compression module). - Our method's peak usage is 17.8 GB (at the diffusion model's VAE decoder). This marginal increase in memory usage represents a reasonable tradeoff. Considering our method's unique capability of achieving ultra-low-bitrate compression and the potential of diffusion-based frameworks, we believe they hold promise for substantial advancements in both performance and efficiency. Q3: **Full-Resolution Results and FID Comparison with GLC** A3: As mentioned, our method demonstrates more promising performance at ultra-low rates (<0.01 bpp). Since GLC's paper evaluates only a narrow bitrate range (0.02-0.03 bpp), we matched this range for a fair comparison on CLIC and DIV2K (full resolution) and MS-COCO 30K (256×256, following GLC's setup). The results can be seen at https://anonymous.4open.science/api/repo/public-5C7B/file/comparison.png?v=f3d457fd. As can be seen from the figure, our method still provides results that are better than or competitive with GLC for full-resolution tests. Q4: **Question about Table 4: Fewer steps but better performance** A4: In Table 4, we provided two baselines for comparison. One is the ANS strategy used in previous works; the other is our ResULIC without the CDM strategy. The results showed that with **CDM** and our **Predicted-$x_0$ training strategy**, better performance can be achieved with fewer steps. More explanation of these two modules: **CDM**: CDM accelerates denoising by initiating diffusion from a distribution conditioned on the compressed feature means $z_c$ rather than from pure noise (Eq. 8). This intermediate starting point reduces sampling steps while enhancing fidelity.
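The CDM initialization described above can be sketched numerically. This is a minimal illustration only: the shapes, noise levels, and function names are made up for the sketch and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cdm_start(z_c, sigma):
    # Initialize the reverse process from a distribution centered on the
    # compressed feature means z_c, at an intermediate noise level.
    return z_c + sigma * rng.standard_normal(z_c.shape)

def pure_noise_start(shape, sigma_max):
    # Conventional initialization: pure Gaussian noise at the maximum level.
    return sigma_max * rng.standard_normal(shape)

z_c = rng.standard_normal((4, 64))           # stand-in for compressed latents
x_cdm = cdm_start(z_c, sigma=0.5)            # start partway along the chain
x_std = pure_noise_start(z_c.shape, sigma_max=80.0)

# The CDM start is far closer to z_c than pure noise is, which is why
# fewer denoising steps suffice to reach a faithful reconstruction.
print(np.linalg.norm(x_cdm - z_c) < np.linalg.norm(x_std - z_c))
```

Because the sampler begins near the conditioning signal, only the residual between $z_c$ and the target latent has to be denoised.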
**Predicted-$x_0$ Strategy**: Our sampling process (Algorithm 2 in the Appendix) suffers from error accumulation when deriving $z_0$ from noise predictions. The training strategy instead predicts $x_0$, reducing this error accumulation, improving accuracy, and enabling high-fidelity reconstruction with fewer steps. Additional experiments in the table below demonstrate reduced decoding delay and improved performance. In "Denoising Steps", the numbers represent sampling steps (50, 20, 4) for <0.05 bpp and (20, 10, 3) for ≥0.05 bpp; the last four columns report BD-Rate (%) ↓:

| Method | w/ Predicted-$x_0$ training | w/ CDM | Denoising Steps | LPIPS | DISTS | PSNR | MS-SSIM |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| ANS | × | × | 50/20 | 0 | 0 | 0 | 0 |
| | × | × | 50/20 | 4.6 | 5.3 | 6.8 | 47.1 |
| | × | √ | 20/10 | -29.3 | -35.4 | -19.5 | -10.6 |
| **ResULIC** | √ | × | 20/10 | -25.8 | -29.5 | -25.3 | -16.7 |
| | √ | √ | 20/10 | -58.4 | -66.8 | -48.7 | -22.9 |
| | √ | √ | 4/3 | -54.4 | -65.8 | -49.7 | -23.9 |

We appreciate your insights and hope this clarifies the trade-offs and unique contributions of our work.
Interpolating Neural Network-Tensor Decomposition (INN-TD): a scalable and interpretable approach for large-scale physics-based problems
Accept (poster)
Summary: This paper presents Interpolating Neural Network-Tensor Decomposition (INN-TD), a framework combining neural network approximation of PDE solutions with tensor decomposition methods. By incorporating locally supported interpolation functions, the authors claim that INN-TD enhances accuracy, speeds up training and solving, and reduces memory usage. The authors validate the claim by performing experiments on high-dimensional parametric PDEs in training, solving, and inverse optimization tasks. ## update after rebuttal I have updated my score based on the authors' reply to my concerns. While I'm nearly satisfied with the quality of the research, I strongly hope that the discussion of interpretability in the original manuscript is updated based on our rebuttal discussion. Claims And Evidence: The claims presented in the paper are largely supported by the experiments and theoretical considerations. However, I believe that additional efforts are necessary to fully validate them for the following reasons: 1. In all the experiments, training was conducted only once. As a result, the reported strong performance could be attributed to favorable initial weights and a fortunate stochastic optimization process. It is strongly recommended to conduct multiple trials (e.g., three to ten) and report the average performance along with the standard deviation to assess statistical robustness. 2. The experiments were conducted solely on a scalar equation, without considering real-world PDE systems such as the Navier-Stokes (NS) equations. This is a critical limitation, as Physics-Informed Neural Networks (PINNs) often perform well on simple scalar PDEs but struggle with NS turbulence. Evaluating the proposed method on a complex real-world PDE would strengthen its validity. 3. Although interpretability is highlighted as a key advantage of the proposed method, no substantive discussion is provided on this aspect.
Specifically, it remains unclear how the authors would interpret cases where INN-TD fails to approximate PDE solutions. Simply attributing failure to the optimization process of the interpolation neural network does not offer meaningful insights from an interpretability standpoint. A more detailed discussion on this issue is necessary. Methods And Evaluation Criteria: The Benchmark dataset can include more challenging tasks, such as compressible NS equations (PDEBench) and various hydrodynamic simulations (The Well). [1] Takamoto, Makoto, et al. "Pdebench: An extensive benchmark for scientific machine learning." Advances in Neural Information Processing Systems 35 (2022): 1596-1611. [2] Ohana, Ruben, et al. "The well: a large-scale collection of diverse physics simulations for machine learning." Advances in Neural Information Processing Systems 37 (2024): 44989-45037. Theoretical Claims: n/a Experimental Designs Or Analyses: 1. As mentioned above, it is essential to report the statistically averaged performance along with the standard deviation to ensure robustness. 2. The authors should provide more details regarding the train/validation/test dataset split. Specifically, is the test set drawn from an in-domain distribution or an out-of-domain distribution? Clarifying this distinction is important for evaluating the generalizability of the model. 3. In Tables 2 and 3, it would be preferable to maintain consistent conditions, such as the number of model parameters, to facilitate a fair comparison of performance. In the current version, the imbalance in experimental conditions makes it difficult to determine whether INN-TD genuinely outperforms other methods. 4. The paper does not discuss the well-known failure modes of PINNs, as reported in [3,4]. 
To demonstrate the practical usefulness of the proposed method, it would be valuable to assess whether INN-TD can mitigate these optimization challenges without requiring special modifications, as extensively discussed in the PINN literature. [3] Wang+ (2021), SIAM J. Sci. Comput., 43 (5): A3055. [4] Wang+ (2022), arXiv:2203.07404. Supplementary Material: I have glanced through all parts, but only quickly. Relation To Broader Scientific Literature: The proposed method can be applied to any existing PINN method. Potentially, it could also be used as a neural operator, though this is not fully explored in the paper. Essential References Not Discussed: References [3] and [4], which discuss PINN optimization issues, are missing ([4] is cited, but its discussion of the optimization issue is not mentioned). [3] Wang+ (2021), SIAM J. Sci. Comput., 43 (5): A3055. [4] Wang+ (2022), arXiv:2203.07404. Other Strengths And Weaknesses: Strength: * As shown in the paper's experiments, INN-TD allows simulating PDE solutions with much better accuracy than other existing NN function approximators, with much shorter inference time. Weakness: * Its extrapolation ability on out-of-distribution data points is unknown. Other Comments Or Suggestions: The standard deviation in Table 4 is very large. The number of trials should be increased to reduce the fluctuation. Questions For Authors: What is the definition of the color and axes of the object in Figure 9? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's valuable comments. > Running statistics and comparison We have added the statistics for all training examples in *[Table 1](https://anonymous.4open.science/r/i7_/t0.png)*. The statistics for the solving examples are summarized in *[Table 2](https://anonymous.4open.science/r/i7_/t1.png)* and *[Table 3](https://anonymous.4open.science/r/i7_/t2.png)*. For the solving tasks, it is important to mention that INN-TD is not based on the collocation method, whose convergence cannot generally be guaranteed. Instead, INN-TD leverages the merits of the finite element method (FEM) and uses the Galerkin weak form to compute the PDE loss in integral form. The resulting systems of equations are sparse and can be solved using matrix solvers instead of relying on stochastic optimization. Thus, INN-TD has guaranteed convergence when solving PDEs and a much faster solving speed than other methods (even when its number of parameters is larger). The statistics of error, computation time, and GPU VRAM usage are plotted in *[Fig 1](https://anonymous.4open.science/r/i7_/p1.png)*. It can be seen that there is no fluctuation in the error, although the initial guess of the solution is initialized randomly using normal distributions. > Only scalar PDE examples We have added one example of solving the elasticity equation in solid mechanics, which is a vector PDE. The results are in the newly added *[Fig 2](https://anonymous.4open.science/r/i7_/e1.png)*, where we can control the accuracy of the model by choosing different $s$ and solve large-scale problems that FEM cannot handle. Solving NS equations is the next step. It's worth noting that NS equations have been solved using tensor decomposition-based methods such as in *(Fernandes, Jeferson et al, CMAME 386(7):114102)*. Consequently, we are confident that INN-TD can be effectively extended to solve NS equations.
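The integral-form evaluations mentioned above become cheap under tensor decomposition because multi-dimensional integrals separate into one-dimensional quadratures. A minimal sketch with made-up mode functions (not the authors' code, and a plain CP-style product rather than C-HiDeNN basis functions):

```python
import numpy as np

def trapz(v, h, axis=-1):
    # Uniform-grid trapezoidal rule (written out to avoid NumPy version
    # differences between np.trapz and np.trapezoid).
    return h * (v.sum(axis=axis) - 0.5 * (v.take(0, axis) + v.take(-1, axis)))

n = 201
x = np.linspace(0.0, 1.0, n); hx = x[1] - x[0]
y = np.linspace(0.0, 1.0, n); hy = y[1] - y[0]

# CP-style separable representation: u(x, y) = sum_m f_m(x) * g_m(y)
f = [np.sin(np.pi * x), x**2]
g = [np.cos(np.pi * y / 2), 1.0 - y]

# Full 2-D quadrature on the tensor-product grid: O(n^2) work.
u = sum(fm[:, None] * gm[None, :] for fm, gm in zip(f, g))
full = trapz(trapz(u, hy, axis=1), hx, axis=0)

# Separated quadrature: one 1-D integral per mode per dimension: O(n) work.
sep = sum(trapz(fm, hx) * trapz(gm, hy) for fm, gm in zip(f, g))

print(full, sep)  # the two quadratures agree
```

The same factorization applies dimension by dimension, which is why the cost of integral-form losses stays linear in the number of dimensions for a decomposed representation.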
> Discussion on the interpretability Model interpretability generally refers to the ability to explain a model’s behavior in a way that is understandable to humans. INN-TD is an interpretable machine learning algorithm with a transparent structure, clear convergence guarantees, and explainability of outcomes. (1) INN-TD’s learning parameters are interpretable TD components of nodal values in each mode. (2) INN-TD has a well-defined convergence theory: as the number of elements and modes increases, the error decreases. We added [Fig. 3](https://anonymous.4open.science/r/i7_/p1.png) to show its convergence behavior. (3) INN-TD’s results are explainable—high or low accuracy in specific regions can be attributed to the quality of segmentation. A newly added [Fig. 2](https://anonymous.4open.science/r/i7_/i1.png) shows how we improve accuracy by refining the grid near the nonlinear solution region when solving the Poisson equation. > Train/validation/test split and generalization for training tasks For the training task, we sampled two parameters $(k, P)$ six times each, generating 36 time-dependent simulations. Seven cases (20%) were randomly chosen as the test set, including unseen parameter combinations. The remaining 29 cases were shuffled across space $(x, y)$, time $(t)$, and parameters $(k, P)$, then split into training and validation sets. This setup ensures the test set represents an out-of-domain distribution relative to the model’s exposure. > Performance of INN-TD for cases where PINN fails According to *(Wang+ (2021), SIAM J. Sci. Comput., 43 (5): A3055.)*, the vanilla PINN exhibits failure modes when solving the Helmholtz equation. Here we use INN-TD to solve the exact same problem. The results are shown in the newly added *[Fig 4](https://anonymous.4open.science/r/i7_/h1.png)*. INN-TD can solve this equation very fast and accurately with guaranteed convergence rates. Moreover, we have also added these essential references as suggested by the reviewer.
> Operator learning using INN-TD Thanks for the suggestion! We have added a new example of using INN-TD to approximate PDE operators. The goal is to approximate a PDE operator without generating any data. Details can be found in the newly added *[Fig 5](https://anonymous.4open.science/r/i7_/o1.png)*, where we successfully approximate the PDE operator mapping from the conductivity field $k(x)$ to the solution $u(x,t)$. We use the Karhunen-Loève (KL) expansion together with C-HiDeNN basis functions to approximate an arbitrary function $k(x)$. INN-TD can solve the problem very fast thanks to the benefits of TD for high-dimensional problems. We have included this problem in the appendix. > Large standard deviation in Table 4 We have tried 200 more runs, and the standard deviation has significantly decreased, as shown in the updated *[table](https://anonymous.4open.science/r/i7_/t.png)*. > Definition of the color and axes in Fig 9. Thanks for the question! The contour plot is the solution field. We have updated the figure as shown in *[Fig 6](https://anonymous.4open.science/r/i7_/s.png)*. --- Rebuttal Comment 1.1: Comment: Thank you for your hard work and detailed explanations in addressing my concerns. Most of my concerns have been resolved, and I have reconsidered my score accordingly. However, for further reconsideration, I would appreciate clarification on the following point: Interpretability: > High or low accuracy in specific regions can be attributed to the quality of segmentation. I am not entirely clear on this explanation. Could you elaborate further? Does this imply that poor performance could be improved by adjusting the grid point locations used for interpolation? If so, a discussion on how to address this issue would be valuable, particularly for real-world applications. --- Reply to Comment 1.1.1: Comment: Thanks for your question! Yes, it means we can improve the accuracy by adjusting the location of grid points in INN-TD.
More specifically, we can use a relatively fine mesh for solution regions with higher gradients and a coarse mesh for other regions. This idea originates from finite element analysis and is very useful for many physical problems, such as stress concentration in modeling indentation, plasticity, etc. As shown in the newly added *[Fig 1](https://anonymous.4open.science/r/i7_/i0.png)*, INN-TD can be written as a pruned MLP. The first 2 hidden layers represent linear finite element basis functions, and the weights and biases are functions of the grid positions. As a result, if we include the weights and biases of the first layers among the learnable parameters, the training process of INN-TD includes the optimization of the grid positions. This is the spirit of r-adaptivity, which originally comes from the finite element method and will result in increased accuracy. We also want to mention that INN-TD can adjust $s$ and $p$ to control the accuracy, as we have shown that larger $s$ and $p$ lead to higher-order smoothness and therefore higher convergence rates. This can be done by modifying the 3rd hidden layer in *[Fig 1](https://anonymous.4open.science/r/i7_/i0.png)*.
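The grid-refinement argument above can be illustrated with ordinary piecewise-linear (hat function) interpolation, the operation that the first hidden layers of the pruned-MLP view represent. This is a self-contained sketch with an arbitrary test function, not the INN-TD code:

```python
import numpy as np

def hat_interpolate(nodes, values, x):
    # Piecewise-linear interpolation on a 1-D grid of nodes: each node
    # carries a locally supported "hat" basis function.
    return np.interp(x, nodes, values)

f = lambda x: np.tanh(20.0 * (x - 0.5))      # sharp gradient near x = 0.5
x_eval = np.linspace(0.0, 1.0, 2001)

# Uniform grid vs. a grid refined near the sharp region (fewer total nodes).
uniform = np.linspace(0.0, 1.0, 21)
refined = np.unique(np.concatenate([np.linspace(0.0, 1.0, 11),
                                    np.linspace(0.4, 0.6, 10)]))

err_u = np.max(np.abs(hat_interpolate(uniform, f(uniform), x_eval) - f(x_eval)))
err_r = np.max(np.abs(hat_interpolate(refined, f(refined), x_eval) - f(x_eval)))
print(err_r < err_u)  # refinement near the gradient lowers the error
```

Concentrating nodes where the solution varies rapidly is exactly the r-adaptivity idea: the same parameter budget buys more accuracy when the grid positions adapt to the solution.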
Summary: This paper proposes the Interpolation Neural Network-Tensor Decomposition (INN-TD), which relies on learnable locally supported interpolation functions for finite element methods and functional tensor decomposition for approximating high dimensional multivariate functions. The authors show that INN-TD outperforms prior methods for data-driven and data-free modeling, at a fraction of the computational cost. ########## Update after rebuttal: I thank the authors for addressing my questions and concerns in detail. I have updated my score accordingly. Claims And Evidence: The authors’ main claims are that the INN-TD is faster, more scalable, and more interpretable than existing methods. They also claim that it leads to better performance and reduced memory footprint. Most of these claims are addressed in the results. The tables in section 4 show that INN-TD achieves lower error rates and is in general faster and more scalable to higher dimensional problems than alternatives. However, the authors do not show the GPU memory footprint in these tables, and I do not see any empirical demonstration of the claimed interpretability of INN-TD. Given that the authors describe C-HiDeNN as “fully interpretable” in Table 5, an experimental demonstration of this would be quite useful. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are mostly aligned with the application at hand. Given that the authors claim that INN-TD is applicable to large-scale systems, it would be interesting to also see a comparison of INN-TD to a system of greater real-world interest. Theoretical Claims: Given that C-HiDeNN is not necessarily well known by the entire community, I think it would be useful for the authors to include a more detailed description (maybe the results but not the proofs) of the properties of C-HiDeNN: locality, Kronecker delta property, partition of unity, etc. 
Experimental Designs Or Analyses: See note above in “Methods and Evaluation Criteria.” Also, it would be interesting to compare INN-TD with other methods in the inverse optimization task. Supplementary Material: I appreciate that the authors included their code in the supplementary material for reproducibility. Relation To Broader Scientific Literature: This paper presents INN-TD, a method that appears more principled and grounded than prior black-box models for solving PDEs using neural networks. It is largely based on C-HiDeNN, but it also incorporates tensor decomposition for better scalability to high dimensional systems. The authors compare with prior works that have been applied to these tasks, notably KANs and PINNs, which is quite useful. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: Figure 2 is not very clear: the legend and axis labels are small, and the caption is quite vague. Questions For Authors: 1. For data-driven training, is the goal to solve one instance of a PDE (if the data pairs are $(x_i, y_i)$)? Is it also possible to pass in PDE parameters as inputs? Did the authors try this setup as well? 2. Why are the norms (such as the one in equation 13) often finite-dimensional when dealing with the infinite-dimensional $u^*$ function? 3. What are $A^e$ and $A^i_s$? I’m having a bit of trouble following the notation in Appendix A. 4. What does “Fully interpretable” in Table 5 mean? I suggest the authors elaborate. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly appreciate your kind comments, which are very helpful for improving the manuscript. > GPU memory footprint in the tables is missing. We have added the GPU memory and computational costs for all cases solved by INN-TD, as shown in *[Table 1](https://anonymous.4open.science/r/i7_/t1.png)* and *[Table 2](https://anonymous.4open.science/r/i7_/t2.png)*. Since INN-TD leverages locally supported basis functions with tensor decomposition (TD), and we use the Galerkin method to solve the weak form of the PDE, the resulting systems of equations have a sparse structure and require a minimal memory footprint. This is different from the collocation method used by many other machine learning PDE solvers. > Meaning of interpretability in Table 5 and experimental demonstration The interpretability of INN-TD is reflected in four aspects: 1. As shown in the new *[Fig 1](https://anonymous.4open.science/r/i7_/i0.png)*, INN-TD can be interpreted as a pruned MLP where the first 2 hidden layers represent linear finite element basis functions. As a result, we can easily control the mesh size of the locally supported basis functions such that a fine mesh is used for solutions with high nonlinearity. 2. C-HiDeNN is constructed by adding the 3rd hidden layer with hyperparameters $s$ and $p$. Larger $s$ and higher $p$ lead to better accuracy and faster convergence. 3. Boundary conditions are easily satisfied due to the Kronecker delta property. 4. Instead of using collocation methods, where convergence cannot be easily guaranteed, we use the Galerkin formulation, and the numerical results certify the convergence rate for different C-HiDeNN parameters $p$ (see *[Fig 2](https://anonymous.4open.science/r/i7_/p1.png)* and *[Fig 3](https://anonymous.4open.science/r/i7_/p2.png)*). For experimental demonstration, we have added a new numerical example.
In this example, we solve a 2D Poisson equation where the main nonlinearity exists only in a small portion of the whole domain, as shown in *[Fig 4](https://anonymous.4open.science/r/i7_/i1.png)*. We leverage the interpretability of INN-TD by refining the grid only close to the nonlinear solution area and using larger $s$ and $p$ to increase the smoothness of the approximation. As a result, the point-wise absolute error is $\mathcal{O}(10^{-6})$. As a comparison, it is not straightforward to design an optimal MLP to reasonably approximate the solution to this problem. > Solve real-life large-scale systems We highly agree with you that INN-TD should be used to solve real-world large-scale problems to demonstrate its advantages over standard numerical methods. We have leveraged INN-TD to solve a large-scale additive manufacturing problem, which will be added to the appendix. The main difficulty of this problem is the inherent mismatch of length scales and the high precision requirement, which makes standard methods suffer from expensive computational costs. Details of the problem setup and results can be found in *[Fig 5](https://anonymous.4open.science/r/i7_/AM.png)* and (J. Guo, et. al. arXiv:2503.13933). > Details of properties of C-HiDeNN are missing We have added a section in the appendix to explain the detailed properties of C-HiDeNN, as summarized in *[Fig 6](https://anonymous.4open.science/r/i7_/c.png)*. > Compare INN-TD with other methods for inverse problems. In INN-TD, we first solve the parametric PDE $u(\mathbf{x_s}, x_t, \mathbf{x}_p)$ by considering parametric variables such as PDE coefficients as inputs. Then this parametric solution is used as a fast prediction model to minimize the difference between the measurements and predictions. PINN solves the inverse parametric problem by directly adding an additional data loss to the loss function. > Figure 2 is not clear Thanks for pointing this out.
We have modified Figure 2 as attached in *[Fig 7](https://anonymous.4open.science/r/i7_/f2.png)* > Pass in PDE parameters as inputs during training For data-driven training tasks, INN-TD can be used either for approximating the solution for constant PDE coefficients or for treating PDE coefficients as extra inputs $\mathbf{x}_p$. In the training example shown in Sec 4.1.1, the input vector is $(x,y,k,P,t)$, which covers location, PDE coefficients, and time. > Definition of norms We used the relative L2 norm error in a discrete sense since we want a fair comparison against collocation-based neural solvers, for which the error is computed on discrete points. But we can also easily compute the norms as integrals to examine the error. In Appendix D2.2 Eq. 44, we defined the error norm as the integral. > Meaning of $A^e$ and $A^i_s$ We have added a more detailed new *[figure](https://anonymous.4open.science/r/i7_/c0.png)* to explain the notation in the C-HiDeNN basis function. $A^e$ refers to the nodes within element $e$; $A^i_s$ is the patch domain at node $i$ with patch size $s$. We will upload the updated manuscript after the discussion period. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions and concerns in detail. I have updated my score accordingly. As a side note, there may be a typo in the new figure c0.png: I assume the caption should read "(a) _convolution_ patch"? --- Reply to Comment 1.1.1: Comment: Thanks for pointing this out! We have fixed the typo and updated the *[figure](https://anonymous.4open.science/r/i7_/c0.png)*.
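The two error conventions contrasted in the rebuttal above — a relative L2 error on discrete sample points versus one computed from integral norms — can be sketched side by side. The functions below are illustrative stand-ins, not the paper's experiments:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
u_exact = np.sin(np.pi * x)
u_pred = u_exact + 1e-3 * np.cos(3 * np.pi * x)   # stand-in for a model output

# Discrete relative L2 error, as collocation-based solvers report it:
# ||u_pred - u_exact||_2 / ||u_exact||_2 over the sample points.
err_disc = np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact)

# Integral form: the same ratio with L2 norms taken as integrals
# (trapezoidal quadrature on the uniform grid).
h = x[1] - x[0]
def l2(v):
    w = v**2
    return np.sqrt(h * (w.sum() - 0.5 * (w[0] + w[-1])))
err_int = l2(u_pred - u_exact) / l2(u_exact)

print(err_disc, err_int)  # the two conventions agree closely on a fine grid
```

On a fine uniform grid the two conventions coincide up to quadrature error, which is why reporting the discrete version still allows a fair comparison with collocation-based baselines.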
Summary: This paper introduces a new framework, Interpolation Neural Network-Tensor Decomposition (INN-TD), designed to efficiently and interpretably solve high-dimensional partial differential equations (PDEs) encountered in large-scale physics-based problems. The key innovation lies in integrating the local interpolation techniques of finite element methods with deep neural network architectures. Additionally, by leveraging tensor decomposition, the method represents a multivariate function as a product of univariate functions, thereby reducing multidimensional integration to a series of one-dimensional integrations. Experimental evaluations in data-driven training, data-free solving, and inverse optimization tasks demonstrate that INN-TD outperforms standard models like vanilla MLP, SIREN, KAN, and CP-PINN in terms of both accuracy and computational efficiency. Claims And Evidence: The authors make two claims in the paper. The first claim is that the INN-TD method is more scalable. This is evident because it converts the original high-dimensional finite element interpolation problem into multiple one-dimensional function fitting problems. The second claim is that INN-TD offers better interpretability. However, I find it hard to understand why INN-TD would be more interpretable compared to other PDE solvers like PINNs. In the paper, the authors only state, “C-HiDeNN is interpretable as it uses locally supported basis functions and the number of elements in C-HiDeNN is closely related to the resolution,” to justify the enhanced interpretability of INN-TD. This explanation is clearly insufficient and does not directly address the issue of interpretability. Methods And Evaluation Criteria: This paper employs a variety of experiments to evaluate the performance of INN-TD, including data-free tasks, general space-parameter-time PDEs, and inverse optimization tasks. The evaluation results on these datasets provide strong support for the proposed method. 
Theoretical Claims: This paper does not provide any theoretical analysis or supporting proofs. Experimental Designs Or Analyses: Since the main claims of this paper focus on scalability and interpretability, I suggest that running time statistics be provided for each experiment. Supplementary Material: I have reviewed the supplementary materials and carefully read all the experimental details in the paper. Relation To Broader Scientific Literature: This paper falls under the category of AI for Science, or more specifically, Tensor Networks (TN) for Science, and therefore has extensive connections with the literature on tensor networks, machine learning, and PINNs. Essential References Not Discussed: Using tensor decomposition (not limited to CP) for solving high-dimensional PDEs is a very interesting research topic; several research works have already proposed solving high-dimensional PDEs from different perspectives. I strongly recommend that the authors expand the review in the related works section to properly acknowledge and respect prior contributions in this area. Other Strengths And Weaknesses: **Strengths** (1) The paper employs the idea of multivariate function decomposition by decomposing a multivariate function into a tensor product of several univariate functions, which offers a clever and efficient way to solve high-dimensional physical problems. (2) The authors utilize FEA basis functions as the basis in the tensor function decomposition, enabling straightforward incorporation of constraints such as boundary conditions when solving PDEs. **Weaknesses** (1) The paper contains numerous notational errors, including in equations (2), (5), (8), (10), (13), and (18). (2) The use of tensor decomposition (not limited to CP) for solving high-dimensional PDEs has been widely studied in recent years. The authors should expand the review in the related works section to thoroughly cover this topic and properly acknowledge previous contributions.
Other Comments Or Suggestions: None. Questions For Authors: **Questions** (1) In equation (7), do the basis function parameters s, a, and p still exist? If they are omitted, please provide additional clarification. (2) Some physical systems may be low-dimensional, so why is the resulting multivariate function of low tensor rank? (3) Using tensor decomposition (not limited to CP) for solving high-dimensional PDEs is not novel; I strongly suggest that the authors expand the related works section on this topic to properly acknowledge prior work. (4) Why not solve for all the parameters at once, or why not train continuously in one phase instead of using a staged training approach? What benefits does the two-phase training method bring? I am curious about the motivation behind this training scheme and how the results would compare if only the first phase or only the second phase were used. (5) How are the FEA parameters set? Since s, p, and a significantly impact performance, this is critical; I recommend that the experimental section provides detailed information on their settings. (6) Does the GPU wall time include both training and inference, or is it solely the training time? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive suggestions, which can greatly improve the current manuscript. > Explanation of the interpretability As stated by *Doshi-Velez et al. (arXiv:1702.08608)*, model interpretability refers to the ability to explain a model’s behavior in a way that is understandable to humans. INN-TD is an interpretable machine learning algorithm designed for scientific and engineering problems, offering transparency in its structure, clear convergence guarantees, and explainable insight into how data or parameter changes affect outcomes. (1) INN-TD’s learning parameters are interpretable nodal values of TD components, in contrast to the opaque weights and biases of neural networks. As shown in the newly added *[Fig 1](https://anonymous.4open.science/r/i7_/i0.png)*, INN-TD can be interpreted as a specially pruned MLP where the first 2 hidden layers represent linear finite element basis functions. (2) the algorithm has a well-defined convergence theory: as the number of segments (bins) and modes increases, the approximation error decreases. We added [Fig. 2](https://anonymous.4open.science/r/i7_/p1.png) to show its convergence. We can see that increasing the number of model parameters leads to a predictable error reduction at a rate of $p+1$. (3) INN-TD’s results are explainable—high or low accuracy in specific regions can be attributed to the quality of segmentation. A newly added [Fig. 3](https://anonymous.4open.science/r/i7_/i1.png) shows how we improve accuracy by refining the grid near the nonlinear region for the Poisson equation. > Theoretical claims It has been proved in *(Lu, Y, et. al Computational Mechanics 72, no. 2 (2023): 333-362.)* that the convergence rate of C-HiDeNN for solving PDE in terms of L2 norm error is $p+1$. 
It has also been proved that INN-TD is a universal approximator and that the following error bounds hold ($E$ refers to the energy norm): $\|u^{\text{C-HiDeNN}}-u^{\text{exact}}\|_E \leq \|u^{\text{INN-TD}}-u^{\text{exact}}\|_E\leq \|u^{\text{FEM}}-u^{\text{exact}}\|_E$ We have added this reference in the revision. > Running time statistics We have rerun all the numerical examples. The statistics are shown in *[Table 1](https://anonymous.4open.science/r/i7_/t1.png)* and *[Table 2](https://anonymous.4open.science/r/i7_/t2.png)*. > Typos in equations Thanks for pointing these out. We have carefully checked and revised all the equations. We will upload the revised manuscript after the discussion period. > Q1: missing hyperparameters in Eq 7. The parameters $a,s,p$ still exist. We omitted them for simplicity of notation. We have added this clarification. > Q2: low dimensionality of solution When solving complex physical problems, standard numerical methods such as the finite element method (FEM) will result in a huge number of unknowns. In many cases, however, the problem solution lives in a subspace of a much smaller dimension. For example, the FEM solution to a 2D physical problem can be represented by a 2D matrix. Using PCA, this full matrix can often be approximated using a few principal components. INN-TD can be treated as the generalization of PCA to general $N$-dimensional problems, where the discretized solution can be deemed an $N$-th order tensor. The decomposed TD components can then be approximated using the C-HiDeNN basis function as a locally supported basis function. > Q3: missing refs Thanks for the suggestion! We have done a thorough literature review and added the references in the manuscript as shown in *[Table 3](https://anonymous.4open.science/r/i7_/ref.png)*. > Q4: training/solving schemes As listed in Algorithms 1-2, we have both boosting (multi-stage) and all-at-once (single-stage) algorithms for the training task.
Boosting is flexible since one doesn't need to predefine the number of modes. However, given the same number of modes, the all-at-once algorithm has higher accuracy. The same rule applies to solving tasks. Although we only used the boosting algorithm in the current manuscript for the solving task, the all-at-once solving scheme is also available and is explained in *(J. Guo, et al., arXiv:2503.13933)*.

> Q5: choosing $s,p,a$

We have thoroughly investigated the influence of hyperparameters in the new solving experiments, as shown in *[Fig 3](https://anonymous.4open.science/r/i7_/p1.png)* and *[Fig 4](https://anonymous.4open.science/r/i7_/p2.png)*. As expected, larger $s$ and $p$ lead to better accuracy. We also found that when $a>20$, the results remain the same. Furthermore, the interpretability of INN-TD allows us to choose $s,p$ for different domains thanks to the locally supported nature of the approximation, as shown in the example of *[Fig 2](https://anonymous.4open.science/r/i7_/i1.png)*.

> Q6: details of GPU wall time

The GPU wall time refers to the training/solving time. Once trained/solved, inference of INN-TD is very fast (~ms).

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' rebuttal. The rebuttal and additional materials have solved my concerns. I will increase my score to 3.
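The $p+1$ convergence claim discussed in the rebuttal above can be checked numerically for the simplest case. Below is a minimal sketch (not the authors' INN-TD code) that interpolates a smooth function with piecewise-linear ($p=1$) finite element "hat" bases, whose nodal values are the interpretable parameters; doubling the number of segments should then reduce the sup-norm error at a rate of roughly $p+1=2$.

```python
import numpy as np

def hat_interpolate(f, n_segments, xs):
    """Piecewise-linear (p = 1) finite-element "hat" basis interpolation
    on [0, 1]: the interpolant is fully determined by its nodal values,
    which is the sense in which such parameters are interpretable."""
    nodes = np.linspace(0.0, 1.0, n_segments + 1)
    return np.interp(xs, nodes, f(nodes))

f = lambda x: np.sin(2.0 * np.pi * x)
xs = np.linspace(0.0, 1.0, 2001)
errs = [float(np.max(np.abs(f(xs) - hat_interpolate(f, n, xs))))
        for n in (8, 16, 32)]
# For linear elements the sup-norm error scales like h^(p+1) = h^2, so
# each doubling of the segment count should cut the error by about 4x.
rates = [float(np.log2(errs[i] / errs[i + 1])) for i in range(2)]
```

Higher-order bases ($p>1$), as used in C-HiDeNN, would steepen the observed rate accordingly.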
Continuous Semi-Implicit Models
Accept (poster)
Summary: The paper studies semi-implicit models and proposes an extension from hierarchical semi-implicit models (hSIM) to continuous-time SIMs (cSIM). The continuous-time SIM has the advantage of being simulation-free, as opposed to hierarchical SIM, and enables multi-step sampling at the same time. This is achieved thanks to a formulation inspired by consistency models. Based on this framework, the authors propose several adaptations to the hSIM framework, e.g. a shifted regularization term. The authors analyse some theoretical properties of their method, particularly the benefits of multi-step sampling. Finally, they demonstrate high performance on CIFAR-10, ImageNet 64x64, and ImageNet 512x512.

## update

I have decided not to change my rating because of the unclear link between the proposed method and consistency models: The authors claim in the rebuttal that "our method surpasses ECT by a large margin on both FID and FD-DINOv2, details shown in Table 1 and Table 2 in paper.". However, this is not true when looking at fixed NFE. It surpasses on FD-DINOv2, but not in FID. The claim that the proposed method allows for faster training than consistency models would require more empirical evidence, by comparing both methods in a similar setting, with the same number of iterations/compute budget. I am unsure that we can draw such conclusions.

Claims And Evidence: The claims made in the paper are supported by empirical and theoretical evidence. Theorems prove 1) the soundness of the shifted regularization term and of the training objective 2) the benefits of multi-step sampling. Empirical evidence shows that the proposed method reaches SOTA-level performance on standard image benchmarks.

Methods And Evaluation Criteria: Method: The method is sound. The core idea is to use a consistency formulation in the setting of semi-implicit models. This resonates with ideas validated in recent works.
Evaluation criteria: The method is tested on standard image datasets (CIFAR-10, ImageNet) and with standard metrics (FID, FD-DINOv2).

Theoretical Claims: I did not verify the theoretical proofs in detail.

Experimental Designs Or Analyses: I checked the experimental design and did not detect any issue. The authors use standard benchmarks in the generative modelling literature (FID/FD on CIFAR-10 and ImageNet-64). An area of improvement would be to use more fine-grained evaluation metrics, for example metrics that detect memorization and over-fitting in generative models (Stein et al. “Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models”, NeurIPS 2023).

Supplementary Material: I have read the supplementary material, although I did not read all proofs in detail.

Relation To Broader Scientific Literature: This paper studies the setting of semi-implicit models. It consists of applying an idea from consistency models (Song et al., 2023) to this setting. This makes it possible to go from hierarchical semi-implicit models to continuous-time and simulation-free semi-implicit models. However, continuous-time SIMs are very close to consistency models. Most notably, the sampling algorithm is the same. However, the training differs, since continuous-time SIMs are based on a Fisher divergence minimization scheme.

Essential References Not Discussed: Not to my knowledge.

Other Strengths And Weaknesses:

Weakness 1: As already explained in Relation to Broader Literature, the extension proposed in this framework relies on the use of a "consistency" formulation, which is largely adopted in the generative modelling community. This limits the technical novelty of the paper.

Weakness 2: To me, this paper asks the question: what is the best way to train a consistency model? Indeed, since your sampling is basically the same as that of a standard consistency model, the main difference is in the training procedure.
Do you have an opinion on why it would be better to rely on a Fisher divergence minimization rather than on a consistency loss? This is not clear since CoSIM is not superior to ECT. There is no theoretical evidence, nor empirical evidence, that your training procedure leads to a better model. Overall, the contribution is not clear to me. The proposed method seems to improve semi-implicit models with the consistency formulation. But are semi-implicit models needed for high-quality few-step generation? Why would this framework be more interesting than standard consistency models? This discussion is currently lacking in the paper.

Other Comments Or Suggestions: No.

Questions For Authors: No.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
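FID and FD-DINOv2, invoked throughout these reviews, are both Fréchet distances between Gaussian fits of feature statistics (Inception and DINOv2 features, respectively). A minimal sketch for the diagonal-covariance case; the real metrics use full covariance matrices and a matrix square root:

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
    FID / FD-DINOv2 compute this on Inception / DINOv2 feature
    statistics with full covariances instead of diagonal ones."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = float(np.sum((mu1 - mu2) ** 2))
    cov_term = float(np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
    return mean_term + cov_term
```

Identical Gaussians score 0, and shifting the mean by 1 in each of two dimensions contributes 2 to the distance, which makes the scale of reported FID/FD values easy to sanity-check.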
Rebuttal 1:

Rebuttal: Thank you for your valuable feedback! Below are our responses to your concerns.

`Q1`: I checked the experimental design ... Stein et al. “Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models”, NeurIPS 2023.

`A1`: Exactly! The paper “Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models” from NeurIPS 2023 by Stein et al. directly proposed the metric FD-DINOv2. They pointed out that FID as a metric unfairly favors GAN-style models and has limited numeric accuracy for modern advanced diffusion models, as they usually reach a very small FID value. This inspired us to use FD-DINOv2 in our experiments.

`Q2`: As already explained in Relation to Broader Literature ... limits the technical novelty of the paper.

`A2`: Here we first clarify the novelty of our method. CoSIM is derived from the framework of continuous semi-implicit variational inference. Methods such as HSIVI [2] and SiD [1] are built upon this semi-implicit training objective to accelerate diffusion models. Theoretically, we introduce a regularization framework for the SiD loss to stabilize training and address the pathological behavior under large $\alpha$ values, as demonstrated through Theorem 3.1 and the ablation study on the regularization term coefficient $\mathrm{coef} = (1+\lambda) \alpha$.

Table 1: Class-conditional ImageNet (512x512) for CoSIM-L given **$\alpha=2.0$**.

|$\mathrm{coef}$ \ Iteration | 204k| 1024k | 2048k | 4096k|
|----|----|----|----|----|
|$\mathrm{coef}=0.5$ with NFE=$2$| 374.8 | 3960.0 | 4129.3 | 4039.9 |
|$\mathrm{coef}=2.5$ with NFE=$2$| 619.0 | 114.1 | 78.7 | 64.9 |

Furthermore, by using a specific parametrization, we derive a novel training method for consistency models at a distributional level.
Unlike traditional consistency models, which are fixed to the reverse ODE path, CoSIM learns the consistency map directly, without being constrained by the intermediate process. Through our experiments, we show this achieves superior performance compared to the current state-of-the-art consistency models on ImageNet $512 \times 512$, using a **much smaller training budget** (ours 20M total training images compared to sCD's [3] 819.2M).

Table 3: Sample quality measured by FID on class-conditional ImageNet (512x512).

|Model|#params|NFE|FID|
|-|-|-|-|
|sCD-S| 280M | 2 |2.50|
|sCD-M| 498M | 2 |2.26|
|sCD-L| 778M | 2 |2.04|
|CoSIM-S| 280M| 2 | 2.66|
|CoSIM-S| 280M| 4 | 2.56|
|CoSIM-M| 498M|2| **1.95**|
|CoSIM-M| 498M|4|1.93|
|CoSIM-L| 778M|2|1.84|
|CoSIM-L| 778M|4| **1.83**|

[1] Zhou, et al., "Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation", ICML 2024.

[2] Yu, et al., "Hierarchical Semi-Implicit Variational Inference with Application to Diffusion Model Acceleration", NeurIPS 2023.

[3] Lu, et al., "Simplifying, stabilizing and scaling continuous-time consistency models", OpenAI.

`Q3`: Do you have an opinion ... Fisher divergence ... a better model.

`A3`: As a novel training scheme derived from continuous semi-implicit variational inference, CoSIM differs fundamentally from traditional consistency models in the following aspects, offering distinct training advantages:

- Fisher divergence does not require strictly following the reverse ODE trajectory but only learns a consistency mapping at a distributional level. This relaxes the choice of distillation path and enables various sampling strategies. It also reduces the training budget, as the mapping can be more flexible. Conversely, ECT strictly follows the ODE trajectory, and its training budget is much higher than ours at a comparable performance level.
- Experimentally, our method surpasses ECT by a large margin on both FID and FD-DINOv2; details are shown in Table 1 and Table 2 in the paper.
- Our approach primarily focuses on few-step sampling, which also differs from SiD and Diff-Instruct. We show in our experiments that our approach achieves significantly better performance using a much smaller total training budget, and scales better to larger models.

`Q4`: Are semi-implicit models ... lacking in the paper.

`A4`: To clarify, our contributions over semi-implicit models are:

- We extend the framework of hierarchical semi-implicit variational inference models to the continuous setting.
- We introduce a regularization term and give a theoretical analysis of why this term further stabilizes training. This addresses the pathological behavior of the SiD loss when $\alpha$ is large.
- Our formulation, under a specific parameterization, reduces to consistency training at a distributional level. This enables a more flexible mapping than traditional consistency training on a fixed reverse ODE trajectory, which reduces training difficulty and the training budget, as already validated by our experiments.
- Please refer to A2 to see that our method can surpass the performance of the latest consistency models with a much lower training budget.
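The multi-step, distribution-level sampling pattern argued for in A3 (map noise to a clean sample, re-noise to a smaller level, map again, with fresh randomness at each step) can be sketched generically. The noise schedule and the toy generator below are illustrative stand-ins, not CoSIM's actual networks:

```python
import numpy as np

def multistep_sample(G, t_schedule, dim, rng):
    """Generic few-step sampler with stochastic re-noising: start from
    pure noise at the largest noise level, map to a clean sample with G,
    then re-noise to each smaller level and map again. Fresh noise is
    injected at every step, which is the diversity argument made in A3."""
    t0 = t_schedule[0]
    x = t0 * rng.standard_normal(dim)          # x ~ N(0, t0^2 I)
    x0 = G(x, t0)
    for t in t_schedule[1:]:
        x = x0 + t * rng.standard_normal(dim)  # re-noise to level t
        x0 = G(x, t)
    return x0

# Toy generator: if the data are standard normal and x_t = x0 + t * eps,
# the posterior mean of x0 given x_t is x_t / (1 + t^2).
G = lambda x, t: x / (1.0 + t * t)
rng = np.random.default_rng(0)
sample = multistep_sample(G, [80.0, 10.0, 1.0, 0.1], dim=4, rng=rng)
```

With NFE=2 or NFE=4 this loop matches the step counts reported in the rebuttal's tables; what distinguishes methods is how the generator G is trained, not this sampling skeleton.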
Summary: CoSIM is a continuous extension of hierarchical semi-implicit models, designed to enhance expressiveness and accelerate diffusion models with pretrained score networks. Unlike traditional hierarchical semi-implicit models, which suffer from slow convergence due to sequential training, CoSIM introduces a continuous transition kernel, enabling efficient, simulation-free training.

Claims And Evidence: The authors claim that HSIVI suffers from slow convergence, while CoSIM's continuous framework improves training efficiency. It is recommended to provide empirical evidence supporting this claim, such as FID/FD vs. training iterations or runtime comparisons.

Methods And Evaluation Criteria: The paper employs standard evaluation metrics, such as FID and FD-DINOv2, commonly used in image generation.

Theoretical Claims: The theorems and proofs appear sound.

Experimental Designs Or Analyses: The experimental design appears valid and reasonable.

Supplementary Material: I reviewed the supplementary materials but only skimmed through the proofs.

Relation To Broader Scientific Literature: Extending the discrete-time framework (in HSIVI) to a continuous-time framework is natural, especially within the diffusion model community. However, the significant performance gain over HSIVI-SM suggests that this extension is both meaningful and interesting.

Essential References Not Discussed: The authors discussed essential references.

Other Strengths And Weaknesses:

- Is the training stable given that multiple networks are trained simultaneously?
- What is the numerical performance with one-step generation?
- How does the parameter $\lambda$ affect performance? Is there an ablation study?
- How does CoSIM integrate with flexible (un-)conditional generation techniques, such as classifier-free guidance?
- How sensitive is the numerical performance to the choice of time distribution $\pi(\gamma | r_s)$?
The same question applies to the weighting functions $w_i(s)$.

Other Comments Or Suggestions:

- Some notations are not defined or explained well:
1. What is $q_t(x_t)$ in Lines 154–155, in the right-hand column?
2. What is $L_{\text{SIVI}-f}$? Is it a symmetric divergence? It seems that $L_{\text{SIVI}-f}(p_\text{parametrized}\|p_\text{target})$ and $L_{\text{SIVI}-f}(p_\text{target}\|p_\text{parametrized})$ are not used consistently (Eq. (4), Eq. (9), and [1]).
3. What is $p_s(x_s)$ in Eq. (9)?

[1] Yu, et al., "Hierarchical Semi-Implicit Variational Inference with Application to Diffusion Model Acceleration"

------

**After rebuttal:** Although the extension to continuous time is relatively straightforward, I appreciate that the authors have included supplementary experiments to address my concerns regarding the empirical aspects. Thus, I raise my rating.

Questions For Authors: Please see the comments above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your valuable feedback! We will address your concerns below.

`Q1`: It is recommended to provide empirical evidence supporting this claim, such as FID/FD vs. training iterations or runtime comparisons.

`A1`: We give the details of our training iterations in Table 4 in Appendix C. For comparison with HSIVI, we give the following two tables.

Table 1: Training efficiency and performance on unconditional CIFAR-10.

|Method | \# of total training images| FID |
|-|-|-|
|HSIVI| 27M | 4.17 (NFE=15)|
|CoSIM| 20M | 2.55 (NFE=4) |
|CoSIM| 200M | 1.97 (NFE=4) |

Table 2: Comparison of training efficiency and performance on ImageNet $64\times 64$.

|Method | \# of total training images| FID |
|-|-|-|
|HSIVI| 27M | 15.49 (NFE=15)|
|CoSIM| 20M | 2.51 (NFE=4)|
|CoSIM| 200M | 1.46 (NFE=4)|

`Q2`: Is the training stable given that multiple networks are trained simultaneously? How does the parameter $\lambda$ affect performance? Is there an ablation study?

`A2`: Yes, all networks are trained simultaneously and stably, which has also been observed in previous works such as SiD [1] and Diff-Instruct [2]. We provide an ablation on $\lambda$ in the form of the coefficient $\mathrm{coef} := \alpha(1+\lambda)$ below.

Table 3: Sample quality measured by FD-DINOv2 on class-conditional ImageNet (512x512) for CoSIM-L given $\alpha=1.2$ and NFE=$2$.

|$\mathrm{coef}$ \ \# of total training images | 204k| 1024k | 2048k | 4096k|
|---|----|----|----|----|
|$\mathrm{coef}=0.5$ | 309.7 | 101.3 | 69.6 | 61.5 |
|$\mathrm{coef}=1.0$ | 392.8 | 92.8 | 61.7 | 50.5 |
|$\mathrm{coef}=1.5$ | 421.9 | 95.2 | 58.6 | **49.2** |

The results show that introducing regularization (i.e., increasing $\mathrm{coef}$) within a suitable range facilitates the learning of $f_\psi$, which in turn improves the training of $q_\phi$.
Furthermore, since the reduction in FD-DINOv2 continues as training progresses, CoSIM demonstrates stable training across a relatively broad range of regularization coefficients, such as 1.0 and 1.5.

[1] Zhou, et al., "Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation", ICML 2024

[2] Luo, et al., "Diff-Instruct: A universal approach for transferring knowledge from pre-trained diffusion models", NeurIPS 2023

`Q3`: The numerical performance with one-step generation.

`A3`: CoSIM is inherently a multi-step model, trained to model the continuous transition kernel rather than a one-step deterministic generation model like GANs. One significant advantage of multi-step generation is the ability to inject more randomness in each step, enhancing sampling diversity. For ablation, we also evaluated the one-step generation quality of CoSIM on ImageNet $512 \times 512$ for the M model.

Table 4: Sample quality on conditional ImageNet 512x512.

|Model | \# params|NFE|FD-DINOv2| FID |
|-|-|-|-|-|
|EDM2-M-dino| 498M |63 |58.44| - |
|Moment Matching|400M |1| - |$3.3^*$ |
|CoSIM-M| 498M| 1 | 74.23| 3.52|

Note that the FID result of Moment Matching [1] is evaluated at a lower resolution (128x128). Considering the difficulty of generating at higher resolution, our method is comparable with Moment Matching [1] for one-step generation.

[1] Salimans, et al., "Multistep distillation of diffusion models via moment matching", NeurIPS 2024.

`Q4`: How does CoSIM integrate ... such as classifier-free guidance?

`A4`: If an unconditional model $G_\phi(x_t; t,\emptyset)$ is used to modify the generation, the parameterization can be considered to integrate it into CoSIM's training:

$$ G_\phi^{(w)}(x_t; t,c) = wG_\phi(x_t; t,c) + (1-w)G_\phi(x_t; t,\emptyset). $$

We also note that SiD-LSG [1] proposed a novel guidance method by using CFG in training and inference with different weights $w$.
This method also adapts to our formulation and provides inspiration for future work.

[1] Zhou, et al., "Long and short guidance in score identity distillation for one-step text-to-image generation."

`Q5`: The numerical performance with respect to the choice of time distribution? ... the weighting functions.

`A5`: We give another ablation study on the hyperparameter $R$ in the time distribution $\pi(\gamma|r_s)$.

Table 5: Ablation on $R$ for CoSIM-M on class-conditional ImageNet ($512 \times 512$) with NFE=4, measured by FD-DINOv2.

|$R$ \ \# of total training images | 2M | 4M| 38M |
|-|-|-|-|
|$R=4$| 220.2 | 73.6 | 57.8 |
|$R=6$| 110.4 | 55.6 | 54.7|
|$R=8$| 95.6 | 54.7 | 55.8|

Different $R$ values give slightly different convergence rates during the early stages of training, but after 38M training images this difference is minimized. For the weighting functions, we adopt the same weighting function as in EDM [1]; this setting has been widely adopted in many works such as SiD and is independent of our algorithm design. Hence we keep it fixed.

[1] Karras, et al., "Elucidating the Design Space of Diffusion-Based Generative Models", NeurIPS 2022

`Q6`: Typos

`A6`: Thank you for catching these typos! We will correct them in our revision.
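The classifier-free-guidance parameterization written out in A4 is a simple affine mixing of conditional and unconditional generator outputs. A minimal sketch on generic arrays (stand-ins for the actual $G_\phi$ network outputs):

```python
import numpy as np

def cfg_mix(g_cond, g_uncond, w):
    """Classifier-free-guidance mixing from A4:
    G^(w)(x_t; t, c) = w * G(x_t; t, c) + (1 - w) * G(x_t; t, None).
    w = 1 recovers the purely conditional output; w > 1 extrapolates
    away from the unconditional one, strengthening the conditioning."""
    return w * np.asarray(g_cond) + (1.0 - w) * np.asarray(g_uncond)

g_c = np.array([2.0, 0.0])   # stand-in conditional generator output
g_u = np.array([1.0, 0.0])   # stand-in unconditional generator output
```

The SiD-LSG idea mentioned above amounts to applying this mixing with one weight during training and a different weight at inference.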
Summary: The paper proposes Continuous Semi-Implicit Models (CoSIM), extending hierarchical semi-implicit models into a continuous-time framework for diffusion model acceleration. CoSIM introduces a continuous transition kernel that allows simulation-free training. It uses semi-implicit variational inference (SIVI) as the training criterion, providing a distributional-level consistency model without relying on moment-level or sample-level reverse processes. The method yields comparable or superior results on FID and improves on existing approaches on the FD-DINOv2 metric on CIFAR-10 and ImageNet datasets. The authors further provide theoretical insights into their approach via error analysis.

Claims And Evidence: The paper makes the following claims:

- Faster Convergence: The authors show that CoSIM reduces the number of iterations required for distillation.
- Improved Generation Quality: CoSIM achieves comparable or better generation quality, supported by comparable FID and better FD-DINOv2.
- Continuous-time Framework: The authors extend the previous SIVI to a continuous framework.

Methods And Evaluation Criteria:

- This paper evaluates CoSIM using widely accepted metrics (FID, FD-DINOv2) and datasets (CIFAR-10, ImageNet) for generative modeling.
- The FD-DINOv2 metric specifically helps evaluate perceptual alignment with human vision, a useful complement to standard FID metrics. The chosen evaluation makes sense given the objective of achieving accelerated image generation.

Theoretical Claims: The paper provides several theoretical claims:

- Derivation of the continuous transition kernel.
- Proofs of unbiasedness and error bounds for two-stage optimization (Theorem 3.1).
- Error analysis (Propositions 3.3 and 3.4), providing a theoretical guarantee about the quality of multi-step sampling.

Experimental Designs Or Analyses:

- Standard benchmarks on CIFAR-10 and ImageNet datasets for unconditional and conditional generation tasks, respectively.
- Comparison against strong, established baselines such as SiD, iCT, ECT, CTM, and Moment Matching methods.
- Reporting quantitative (FID, FD-DINOv2) and qualitative evaluations (sample images).

The experiments look valid and sound. The paper also outlines experiment setups and hyperparameters.

Supplementary Material: The supplementary material was reviewed briefly, including:

- Proofs and additional theoretical details.
- Experiment details.
- Additional visual examples.

The supplementary material helps support the main claims.

Relation To Broader Scientific Literature:

- Builds upon recent advances in diffusion model acceleration and semi-implicit variational inference (SIVI).
- Identifies the advantages of continuous over discrete-time frameworks in hierarchical generative modeling.
- Compares itself to recent notable works (e.g., SiD, HSIVI), effectively highlighting contributions and improvements.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: I personally like this paper. It extends HSIVI to a continuous-time framework, provides solid theoretical grounding for prior works like SiD, and validates the regularization techniques. However, I find the paper's emphasis on its connection to consistency models overstated. The current approach seems more naturally framed as an extension of HSIVI to continuous-time settings or as a multi-step generalization of SiD. The relation to consistency models feels unnecessary and shouldn't be presented as one of the paper's contributions.

Questions For Authors:

- Regularization Strength: How sensitive is CoSIM to the choice of the regularization hyperparameter?
- Computational Overhead: Could you clarify or quantify the computational overhead introduced compared to simpler baseline distillation methods, such as SiD and Diff-Instruct?

Ethical Review Flag: Flag this paper for an ethics review.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for providing valuable feedback! Here are our responses.

`Q1`: I personally like this paper. It extends HSIVI ... paper's contributions.

`A1`: Thank you for your interest in our work! We acknowledge that CoSIM can be regarded as a continuous extension of HSIVI, but our contributions go beyond that and differ from traditional consistency models in the following aspects:

- Unlike the traditional ODE trajectory distillation applied by standard consistency distillation methods, we train the consistency model at a distributional level using Fisher divergence. This approach eliminates the need to constrain the generator $G_\phi$ to sample along a reversed ODE path predefined by the score function, allowing the model to learn a more flexible consistency mapping at a distributional level.
- Experimentally, we find that this distinction enables CoSIM to train much faster and outperform the state-of-the-art consistency model sCD [1], using only 20M total training images (compared to 819.2M for sCD). We provide extended results of CoSIM compared with sCD on FID, demonstrating its superior performance across various model sizes.

Table 1: Sample quality measured by FID on class-conditional ImageNet (512x512).

|Model|#params|NFE|FID|
|----|----|----|----|
|EDM2-S-FID| 280M| 63 | 2.56|
|EDM2-M-FID| 498M| 63 | 2.25|
|EDM2-L-FID| 778M| 63 | 1.96|
|sCD-S| 280M | 2 |2.50|
|sCD-M| 498M | 2 |2.26|
|sCD-L| 778M | 2 |2.04|
|CoSIM-S| 280M| 2 | 2.66|
|CoSIM-S| 280M| 4 | 2.56|
|CoSIM-M| 498M| 2 | **1.95**|
|CoSIM-M| 498M| 4 | 1.93|
|CoSIM-L| 778M| 2 | 1.84|
|CoSIM-L| 778M| 4 | **1.83**|

`Q2`: How sensitive is CoSIM to the choice of the regularization hyperparameter?

`A2`: We conducted an ablation study on $\lambda$ with the coefficient $\mathrm{coef} := \alpha(1+\lambda)$ in (15) for conditional ImageNet $512 \times 512$ generation of CoSIM using the L model size, with fixed $\alpha=1.2$.
The results demonstrate that introducing regularization (i.e., increasing $\mathrm{coef}$) significantly enhances the learning process of $f_\psi$ over a wide range.

Table 2: Sample quality measured by FD-DINOv2 on class-conditional ImageNet (512x512) for CoSIM-L given $\alpha=1.2$.

|$\mathrm{coef}$ \ \# of total training images | 204k| 1024k | 2048k | 4096k|
|-|-|-|-|-|
|$\mathrm{coef}=0.5$ with NFE=$2$| 309.7 | 101.3 | 69.6 | 61.5 |
|$\mathrm{coef}=1.0$ with NFE=$2$| 392.8 | 92.8 | 61.7 | 50.5 |
|$\mathrm{coef}=1.5$ with NFE=$2$| 421.9 | 95.2 | 58.6 | 49.2 |

`Q3`: Computational Overhead: Could you clarify or quantify the computational overhead ... SiD and Diff Instruct?

`A3`: There are three networks in the training process: the generator $G_\phi$, the teacher $S_{\theta^*}$, and the auxiliary function $f_{\psi}$. During training, the teacher $S_{\theta^*}$ is frozen while the generator $G_\phi$ and the auxiliary function $f_{\psi}$ evolve simultaneously. For SiD and Diff-Instruct, all three networks are kept the same as in the baseline methods and initialized from the same checkpoint. For our method, the auxiliary function $f_{\psi}$ is modified to incorporate the extra time parameter $s$ introduced in Section 3. In practice, we duplicate the time embedding layer, which only adds a very small number of extra parameters. Specifically, on CIFAR, the total number of trainable parameters is 111,465,478 for SiD and Diff-Instruct. With our modification, it increases to 116,128,006, adding only 4% extra parameters. On ImageNet $64\times64$, the total number of trainable parameters is 591,798,534 for SiD and Diff-Instruct. With our modification, it increases to 619,999,878, also adding only 4% extra parameters.

Additionally, we time the computational overhead for SiD, Diff-Instruct and our method on unconditional CIFAR generation. We conduct all experiments on a single L40S GPU with all hyperparameters fixed, such as batch size, learning rate, optimizer, etc.
Table 3: Training time for 10496 images.

| Time \ Model| SiD | Diff-Instruct | CoSIM |
|------|:-------:|:-------------:|:-------:|
| Time | 2min28s | 2min42s | 3min08s |

Compared to SiD and Diff-Instruct, our method requires 27% and 18% more training time, respectively. Apart from the extra trainable parameters explained above, the regularization we introduced in Theorem 3.1 also contributes to the extra time. We argue such overhead is worthwhile, as the overall training objective stabilizes and the total training budget shrinks. On conditional ImageNet $64\times64$ generation, our method only requires 200M training images to reach FID 1.46, surpassing SiD, which needs 1000M training images to reach FID 1.52. On the same task, Diff-Instruct reaches FID 5.57 with 4.8M training images. Since we aim for a much lower FID, we did not frequently test our model in the early stages of training. The closest data point we have is that our model reaches FID 4.15 with 5.12M training images.

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' response, including the additional experimental results, clarification on the method, and the ablation on the regularization strength. That said, I must **reiterate my concern** regarding the connection to consistency models. In my view, this connection is both unnecessary and overstated. CoSIM is much more naturally situated within the lineage of score distillation methods starting from SDS, such as SiD, VSD, and Diff-Instruct. Like these methods, CoSIM relies on a teacher model and two evolving networks to perform distribution-level matching. What CoSIM brings to this family is valuable: it extends HSIVI to modern generative models with strong performance, improves the stability of these approaches, and introduces a meaningful regularization mechanism. However, I see 0% alignment with consistency models. Framing CoSIM as a consistency model not only adds confusion but also detracts from the strength of the actual contribution.
While I **initially leaned positively** toward this work, the authors' insistence on the framing has led me to **shift to a more neutral stance**. My recommendation for acceptance now depends on a substantial revision of the paper that repositions CoSIM clearly within the context of score distillation and SIVI, and removes the overclaims about consistency models. Without such a revision, I would not support acceptance. To be clear, my concern is **not with the method itself** or the theoretical foundation, but with **the framing and claims** made in the paper. I am very confident in this assessment.

---

Reply to Comment 1.1.1:

Comment: Thank you for taking the time to carefully review our work and for pointing this out. We acknowledge that we have not clearly distinguished CoSIM from consistency models. CoSIM utilizes a parameterization approach that shares some similarities with consistency distillation, leveraging the consistency map $G_\phi(x_t, t)$. However, its methodology is **fundamentally rooted in the framework of score distillation**, which is notably distinct from the ODE trajectory distillation used in consistency models. To address this, we will focus in Sections 3.1 and 3.2 on presenting our continuous HSIVI framework, situating it within the context of score distillation methods such as SiD. **In Section 3.3, we will revise the title** "A Path Towards Consistency" to "Parameterization of CoSIM." Additionally, we will **update the abstract and conclusion** by replacing the phrase "provides a novel method of consistency model training on a distributional level" with "we provide a novel method for multistep distillation of generative models training on a distributional level." We will also **revise the introduction** by replacing the phrase "The design of the continuous transition kernel aligns CoSIM closely with consistency models" with "The design of the continuous transition kernel shares some similarities with consistency distillation (CD)."
Thanks again for identifying this ambiguous claim. We will make these modifications in our revision.
Summary: This paper proposes CoSIM, a continuous hierarchical variational inference framework for semi-implicit distributions, aiming to accelerate the sampling of diffusion models. The main contributions of this paper include:

* The authors propose a score-based diffusion distillation approach demonstrating superior performance on DINOv2 metrics compared to data-free diffusion distillation algorithms with 4 sampling steps across multiple datasets (CIFAR-10, ImageNet 64×64, and ImageNet 512×512).
* The authors theoretically connect the Hierarchical Semi-Implicit Variational Inference (HSIVI) framework with Score-identity Distillation (SiD) and propose a regularization technique to address the bias of the fake score network in the score matching loss.
* The authors demonstrate that the generator network of SiD is intrinsically a consistency function and prove that a few-step sampler could potentially be a sweet spot in the tradeoff between iterative refinement sampling and error accumulation in multi-step sampling.

## update after rebuttal:

See the comments below.

Claims And Evidence: This paper overall supports its claims with clear evidence. However, my major concerns lie in the following two aspects:

* The authors mention in lines 199-202 that when $\nabla \log q_{\phi} (x_s;s,t)$ deviates significantly from the target score model $S_{\theta}(x_s;s)$, this initialization strategy becomes inefficient for the second-stage optimization in equation (10). It is not very clear to me what the bias in the SiD loss suggested by the authors is and how it is connected to the regularization; I would appreciate more elaboration from the authors on this, either theoretically or through ablation studies on $\lambda$.
* The authors mention in lines 167-169 that the fused loss function exhibits pathological behavior when $\alpha>1$; I guess this is a typo?
Otherwise, it would be interesting to ask why the authors follow $\alpha = 1.2$ in the implementation of CoSIM. Given the combination of $\lambda$ and $\alpha$, it would also be interesting to see ablation studies on $\alpha$ if time permits. Methods And Evaluation Criteria: The methods and evaluation make sense to me. Theoretical Claims: The proofs for Theorem B.1 and Propositions 3.2 and 3.4 make sense to me. My major concern lies in the statement of Proposition 3.4 and Assumption 3.3: $\epsilon_f$ may be inflated here due to the regularization, so it would be good to show how the regularization benefits the training of CoSIM. Experimental Designs Or Analyses: I noticed that CoSIM 4-step differs by only 0.06 from SiD on ImageNet 64×64, yet shows a significant 20-point improvement in FID DINOv2. Interestingly, CoSIM 2-step differs by a larger margin of 1.83 from SiD on ImageNet 64×64, while maintaining the same 20-point improvement in FID DINOv2. I would appreciate intuition on this observation. Supplementary Material: Yes, I reviewed the derivation, implementation details and qualitative examples. Relation To Broader Scientific Literature: The key contributions of this paper relate to accelerating the inference of diffusion models, enrich the toolbox of hierarchical variational inference, and provide a novel perspective on Score-identity Distillation for diffusion models. Essential References Not Discussed: Related works that are essential to understanding the key contributions of the paper are cited. Other Strengths And Weaknesses: The paper provides a solid theoretical derivation backed by extensive experiments. However, I think Sections 3.1 and 3.2 essentially re-derive the SiD loss for a few-step sampler, which limits the novelty. Though the derivation through the Legendre-Fenchel dual provides insights into the loss function of SiD, the improvement compared to SiD's bi-level optimization algorithm is limited.
The derivation of the error bounds for multi-step sampling could provide insights into few-step diffusion acceleration. The contribution of this paper is solid, yet the overall significance and novelty are moderate. Other Comments Or Suggestions: Please see the above sections. Questions For Authors: Please see the Claims And Evidence section. It would be helpful if the authors could elaborate more on the $f_{\psi}$ shifting. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and valuable feedback. We address your specific questions and comments below. `Q1`: It does not seem very clear to me what is the bias in the SiD loss ... on $\lambda$. `A1`: Thank you for your valuable suggestion! The significance of our regularization comes in three aspects: - Regularizing the second-stage optimization of (10) (as discussed in Theorem 3.1) helps guide the optimal $f_{\tilde{\psi}^*(\phi)}$ towards the pretrained score function $S_{\theta^*}(\cdot;s) \approx p(\cdot;s)$, and the Nash equilibrium $\phi^*$ remains consistent as in Theorem 3.1. - Our formulation strictly contains the SiD loss as a special case, which is equivalent to $\alpha = 1.2$ and $\alpha(1+\lambda) = 0.5$. This introduces a potential bias when $f_\psi$ is perfectly trained and $\alpha$ is too large. As shown in (38), this causes $q_\phi$ to optimize in a direction that deviates from the target distribution, corresponding to the failure case in Figure 7 of SiD. Our formulation allows for a larger $\lambda$ while still ensuring a positive coefficient $(1+2\lambda)$ in (38), mitigating the potential bias in the SiD loss. - The ablation study on $\mathrm{coef} := \alpha(1+\lambda)$ in (15) with $\alpha=1.2$ shows that introducing regularization facilitates the learning of $f_\psi$, improving the training of $q_\phi$. Table 1: Class-conditional ImageNet (512x512) for CoSIM-L given **$\alpha=1.2$**. |$\mathrm{coef}$ \ # of total training images | 204k| 1024k | 2048k | 4096k| |-|-|-|-|-| |$\mathrm{coef}=0.5$ with NFE=$2$| 309.7 | 101.3 | 69.6 | 61.5 | |$\mathrm{coef}=1.5$ with NFE=$2$| 421.9 | 95.2 | 58.6 | 49.2 | `Q2`: The authors mentioned in line 167-169 that the fused loss function ... on $\alpha$ if time permits. `A2`: The pathological behavior when $\alpha > 1$ refers to the SiD loss, which is equivalent to $\mathrm{coef} := \alpha(1+\lambda) = 0.5$.
We propose using a regularization term by increasing $\lambda$ to mitigate this issue, as discussed in A1. The ablation study below with a larger $\alpha=2.0$ shows the effectiveness of our regularization, and the pathological results of the SiD loss are similar to the observations shown in Figure 7 of SiD's paper. Table 2: Class-conditional ImageNet (512x512) for CoSIM-L given **$\alpha=2.0$**. |$\mathrm{coef}$ \ # of total training images | 204k| 1024k | 2048k | 4096k| |-|-|-|-|-| |$\mathrm{coef}=0.5$ with NFE=$2$| 374.8 | 3960.0 | 4129.3 | 4039.9 | |$\mathrm{coef}=2.5$ with NFE=$2$| 619.0 | 114.1 | 78.7 | 64.9 | `Q3`: My major concern lies in the statement for Proposition 3.4 and assumption 3.3 ... regularization. `A3`: $\epsilon_f$ measures the gap between $f_\psi$ and its optimal $f_{\psi^*}$ in the second optimization problem described in (12). This optimization is a quadratic problem, and in the optimal case, $\epsilon_f$ approaches 0. Therefore, the introduction of regularization does not affect $\epsilon_f$'s convergence to 0. Moreover, when $f_\psi$ is initialized as $S_\theta$, regularization brings $f_\psi$ closer to the optimal $f_{\psi^*}$, as shown in (16). `Q4`: I noticed that CoSIM 4-step differs by only 0.06 ... observation. `A4`: FID and FD-DINOv2 are defined on different numerical scales, and FD-DINOv2 was designed to be more suitable for diffusion model evaluation. The results below show some well-known diffusion models evaluated for conditional image synthesis on ImageNet $256\times 256$. | Metric \ Model| ADM | LDM | |-|-|-| | FID | 4.59 | 3.60 | | FD-DINOv2 | 203.45 | 112.40 | Furthermore, FID was introduced in 2017 and does not have enough numerical range to differentiate strong modern diffusion models; FD-DINOv2 was introduced to provide a larger numerical gap to solve this issue. `Q5`: The contribution of this paper is solid yet the overall significance and novelty is moderate.
`A5`: Thank you for acknowledging the novelty of the theoretical derivation of the fused loss and the insight on regularization. We believe that CoSIM's contributions go beyond SiD in the following aspects: - We theoretically introduce a regularization framework for the SiD loss to stabilize training and resolve the pathological behavior under large $\alpha$ values. - We propose a continuous HSIVI training scheme. With a specific parametrization, we derive a novel training method for consistency models at a distributional level. - Experimentally, our method performs better than the state-of-the-art consistency model sCD [1], using a much smaller training budget (ours 20M compared to sCD's 819.2M). Table 3: Sample quality measured by FID on class-conditional ImageNet (512x512). |Model|#params|NFE|FID| |-|-|-|-| |sCD-S| 280M | 2 |2.50| |sCD-M| 498M | 2 |2.26| |sCD-L| 778M | 2 |2.04| |CoSIM-S| 280M| 2 | 2.66| |CoSIM-S| 280M| 4 | 2.56| |CoSIM-M| 498M| 2 | **1.95**| |CoSIM-M| 498M| 4 | 1.93| |CoSIM-L| 778M| 2 | 1.84| |CoSIM-L| 778M| 4 | **1.83**| [1] Cheng Lu et al. "Simplifying, stabilizing and scaling continuous-time consistency models."
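A note on the FID vs. FD-DINOv2 comparison in A4 above: both metrics compute the same Fréchet distance between Gaussian approximations of feature statistics, $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$; only the feature extractor differs (Inception-v3 for FID, DINOv2 for FD-DINOv2), which is what changes the numerical scale. A minimal NumPy sketch of the shared formula (an illustration, not either metric's reference implementation):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1) + Tr(sigma2) - 2 Tr((sigma1 sigma2)^{1/2}).
    FID and FD-DINOv2 apply this formula to feature statistics from
    different backbones, which is why their scales differ."""
    diff = mu1 - mu2
    # Only the trace of the matrix square root is needed; for PSD
    # covariances the eigenvalues of sigma1 @ sigma2 are real and
    # non-negative, so the trace equals the sum of their square roots.
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_sqrt)
```

Computing the trace via eigenvalues avoids a full matrix square root while giving the same value for positive semi-definite covariances.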
SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
Accept (oral)
Summary: The paper introduces a benchmark of 1488 software engineering freelance tasks, consisting of managerial and coding tasks, to evaluate LLM performance on real-world tasks. The benchmark helps get a clearer picture of the potential social and economic impacts of AI. For individual contributor/coding tasks, comprehensive e2e tests are developed and a user tool is provided to mimic a developer's actions associated with the task in a browser. Performance of various LLMs is measured as pass@1, earnings, and earning rate. Reported results show a significant gap between SOTA LLMs' performance and what humans can achieve. This gap is more pronounced for individual contributor tasks, where pass@1 remains below 30%, versus managerial tasks, for which this value reaches up to 45%. Claims And Evidence: The main claim of the paper is raising more awareness about LLMs' potential in automating real-world software engineering tasks to get a clearer picture of their social and economic impacts. The claim is clear and is backed up by the reported experimental results. Methods And Evaluation Criteria: The prepared benchmark based on Upwork and the evaluation criteria to gauge LLM performance make sense. Theoretical Claims: There are not many theoretical claims in the paper, so this box is less applicable. Experimental Designs Or Analyses: The experimental design looks sound, e.g., the authors are mindful of possible contamination issues and have some mitigation plans. The analysis accompanying the experimental results, including the ablation studies, looks sound as well. Supplementary Material: I glanced through the supplementary material; section A.8 and the concrete prompts are useful in particular.
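As context for the pass@1 numbers discussed in this review: pass@1 is the k = 1 case of the standard pass@k metric. Assuming the usual unbiased estimator from Chen et al. (2021) — an illustration, not necessarily SWE-Lancer's exact aggregation — a minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n attempts of which c passed,
    is correct. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With a single attempt per task (n = k = 1), this reduces to the plain fraction of tasks solved, averaged over tasks.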
Relation To Broader Scientific Literature: Studying AI/LLM performance on real-world tasks tied to monetary rewards is a critical and relevant topic for the broader scientific literature, since such studies can calibrate expectations about what can and cannot be expected from AI and ground perceptions of AI's potential. Essential References Not Discussed: There are certain relevant and seminal works that are not cited in the paper. For example, Evaluating Large Language Models Trained on Code by Chen et al. and Program Synthesis with Large Language Models by Austin et al. are among such cases. Other Strengths And Weaknesses: **Strengths** - The comprehensive benchmark of real-world software engineering tasks, with curated e2e tests tied to monetary rewards, is a valuable asset for the research community. On top of that, the experimental results, which provide a realistic picture of AI's capability in this domain, are extremely valuable and will trigger valuable discussions in the research community. **Weaknesses** - The weakest aspect of the paper is the literature review section. The authors could have done a more comprehensive study of the existing literature. More importantly, they could have compared the learnings from their reported experiments against some of the claims or experimental results reported in other existing works. Other Comments Or Suggestions: No other comments/suggestions. Questions For Authors: - After reading the paper, I'm still not clear how the user tool is helping. The paper says "The model does not receive any feedback about whether the action was successful"; it also says "a text-based trajectory and a series of screenshots are written to its working directory when the tool finishes." Should I assume that by "action was successful" the authors mean whether the e2e test passes successfully?
- Still related to the user tool: based on Figure 9, it looks like the user tool is not helping GPT-4o much. Can the authors explain why that's the case for GPT-4o but not o1? - One thing that I could not follow is whether managerial tasks are also collected from Upwork; the idea of having a few proposals and letting the LLM decide which one is more appropriate is not aligned with my understanding of the tasks on a platform like Upwork. Maybe my knowledge is limited, but can the authors clarify? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your careful and thoughtful review! We’ll address your points below. > There are certain relevant and seminal works that are not cited in the paper. For example, Evaluating Large Language Models Trained on Code by Chen et al. and Program Synthesis with Large Language Models by Austin et al. are among such cases. Thank you for this flag. We do include Chen et al.’s paper, HumanEval, in the Related Work section, but we will cite directly and add to Table A.4. We sincerely apologize for not citing Austin et al. and will include it in the camera-ready version if accepted to the conference (as per ICML rules we cannot update the submitted PDF in response to rebuttals). To be more comprehensive, we also plan to add: - Lu et al. (2021) – CodeXGLUE - Li et al. (2022) – AlphaCode - Nijkamp et al. (2022) – CODEGEN - Lai et al. (2023) – DS-1000 Please let us know if there are others we should include. We sincerely apologize for our oversight and appreciate you pointing this out. > More importantly, they could have compared their learning from reported experiments in their work against some of the claims or experimental results reported in other existing works. Great point! In terms of comparing learnings: - For IC SWE tasks, one comparable benchmark is SWE-Bench Multimodal, where GPT-4o solved 12.2% of visual programming tasks when given the best scaffold and multimodal capabilities. This is comparable to our GPT-4o scoring 8.0% on our IC SWE Diamond tasks when the model can ingest image screenshots via the user tool. - On SWE-Bench Verified, the industry standard benchmark for software engineering, GPT-4o currently scores 38.8% using the best available scaffold, but only 8% on SWE-Lancer Diamond IC SWE – likely because SWE-Bench Verified focuses on bugfixes, and SWE-Lancer includes open-ended problems (adding new features, reliability) and full-stack development. 
This highlights the need for more challenging, unsaturated SWE benchmarks. - SWE-Lancer is unique in evaluating engineering management skills – we did not identify a direct analogue in prior works; we welcome any suggestions for related work. We will include these comparisons in the camera-ready version of our paper if accepted to ICML. > After reading the paper I'm still not clear how the user tool is helping. The paper says "The model does not receive any feedback about whether the action was successful"; it also says "a text-based trajectory and a series of screenshots are written to its working directory when the tool finishes." Should I assume by "action was successful" authors mean whether the e2e test passes successfully? We’re happy to clarify! When the model calls the user tool, it automatically performs actions relevant to the task on the website (specifically, by using the Playwright library to simulate user actions), and sends the model a report (screenshots + a text-based log of browser states). The user tool does NOT provide feedback to the model on whether it passed or failed the end-to-end tests, but it does allow for iterative debugging and verification of hypotheses. We created the user tool to mirror how humans iteratively check their code by running it and inspecting the output. > Based on Figure 9, it looks like the user tool is not helping GPT-4o much, can authors explain why that's the case for GPT-4o but not o1. Definitely! Smarter models are able to “get more out of” the user tool. GPT-4o often fails to properly use the user tool. For instance, as we outline in Section A.9 of the appendix, the model fails to review the user tool logs that indicate the actual behavior. > One thing that I could not follow is whether managerial tasks are also collected from Upwork, the idea of having a few proposals and letting the LLM decide which one is more appropriate is not aligned with my understanding of the tasks on a platform like Upwork. Happy to clarify!
The tasks in this benchmark were posted on both Upwork and GitHub. When a task is posted (e.g., “solve X bug”), interested freelance software engineers are required to first submit proposals for solving the task on the corresponding Github issue (see the Expensify/App GitHub repository for examples). A freelance engineering manager then reviews all proposals and chooses the best one. The engineer whose proposal is chosen then gets to actually implement the solution. For each task with multiple proposals, we created a “managerial task” by collecting all the freelancer proposals, having models choose the best proposal, and comparing model choices to the real manager’s. You are right that not all Upwork software engineering tasks entail a proposal selection process – this just reflects Expensify’s public vetting process. Thank you again for your valuable insights. We hope we have addressed your concerns and respectfully invite you to consider raising your score if you feel the benchmark has improved as a result of these clarifications. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for addressing my comments, providing further clarifications/explanations, and their planned updates for the related work section. I keep my "accept" recommendation.
Summary: This paper introduces SWE-Lancer, a benchmark for evaluating language models' capabilities in real-world software engineering tasks. The benchmark comprises 1,488 freelance software engineering tasks from Upwork, collectively valued at $1 million USD in actual payouts. SWE-Lancer includes two distinct task categories: (1) Individual Contributor (IC) SWE tasks where models generate code patches to resolve real-world issues, and (2) SWE Manager tasks where models select the optimal implementation proposal from multiple options. Unlike previous benchmarks, SWE-Lancer evaluates performance across the full technology stack with end-to-end tests created by professional software engineers. Claims And Evidence: The claims made in the submission are well-supported by evidence: - The claim that SWE-Lancer provides a more realistic evaluation of SE capabilities is supported by the detailed description of task selection, construction, and verification processes. The use of real freelance tasks with actual monetary values and end-to-end testing provides stronger evidence than previous benchmarks. - The performance claims for different models are backed by experimental results presented in tables and figures, showing pass rates and earnings across different task types, categories, and conditions. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand: - The division into IC SWE and SWE Manager tasks allows for evaluation of different aspects of software engineering capabilities. - Using real-world freelance tasks with actual monetary values provides economic significance to the evaluation. - The end-to-end testing approach is a significant improvement over unit tests used in previous benchmarks. - The evaluation of models under different conditions (reasoning effort, number of attempts, with/without user tool) provides insights into factors affecting performance.
Theoretical Claims: The paper does not make theoretical claims that require formal proofs. Experimental Designs Or Analyses: - The comparison across multiple frontier models (GPT-4o, o1, Claude 3.5 Sonnet) provides a good overview of current capabilities. - The ablation studies on reasoning effort, number of attempts, and user tool access offer insights into factors affecting performance. - The reporting of both pass rates and earnings gives a more complete picture of model performance. Supplementary Material: I reviewed the supplementary material (appendices), which contains information about the agent framework, test verification process, task composition, and example trajectories. Relation To Broader Scientific Literature: It connects to broader research on automated software engineering, agentic systems, and the economic impact of AI. Essential References Not Discussed: The paper appears to cover the relevant prior work in software engineering benchmarks and LLM evaluation. Other Strengths And Weaknesses: Strengths: **Real-world relevance**: Using actual freelance tasks with real monetary values provides a realistic evaluation of LLM capabilities. **End-to-end testing**: The use of comprehensive end-to-end tests rather than unit tests is an improvement over previous benchmarks. **Economic dimension**: Mapping performance to monetary value adds an important perspective on the practical impact of AI capabilities. **Comprehensive evaluation**: The paper evaluates multiple models under various conditions, providing a rich understanding of current capabilities and limitations. Weaknesses: **Limited description of the agent framework**: Although the details of the agent framework are mentioned in the appendix of the paper, it is still difficult to fully understand its implementation. Other Comments Or Suggestions: The paper would benefit from a more detailed discussion of the agent framework in the main text rather than relegating it to the appendix.
Questions For Authors: - Have you analyzed whether there are systematic patterns in the types of tasks that models fail to solve correctly? This could provide valuable insights for targeted improvements in model capabilities. - Given that SWE Manager tasks showed higher success rates than IC SWE tasks, do you believe this reflects a genuine difference in capability, or could it be related to the evaluation methodology (selection from options vs. generation)? - How might the benchmark be extended to evaluate human-AI collaboration more directly, beyond the current user tool experiments? This seems like an important direction given that collaborative scenarios may be more realistic in practice than fully autonomous AI software engineers. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your careful and thoughtful review! We’ll address your points below. > Limited description of the agent framework: Although the details of the agent framework are mentioned in the appendix of the paper, it is still difficult to fully understand its implementation. The paper would benefit from a more detailed discussion of the agent framework in the main text rather than relegating it to the appendix. Thank you for flagging! We included information about our agent harness in the appendix to satisfy the ICML page limit requirements. We’ve also open sourced eval-harness code so anyone can plug in an API key and run our eval. We are happy to update the main paper with more details about the agent framework in the methodology section in the camera-ready version if helpful. Are there specific aspects of the agent framework you’d like us to share details on? We are happy to clarify them here if helpful! > Given that SWE Manager tasks showed higher success rates than IC SWE tasks, do you believe this reflects a genuine difference in capability, or could it be related to the evaluation methodology (selection from options vs. generation)? Great question! Our manager tasks involve selecting from predefined options, and the base rate of selecting a correct answer by chance in a multiple-choice setting is higher than that for an open-ended question. Therefore, the “lower bound” for what a model could achieve is by definition higher in our management tasks than our IC tasks that involve writing code. Still, the management success rate on our evaluation is much higher than can be attributed to chance alone, and the difference in success rates is meaningful (management success rates are more than double that of IC SWE tasks), suggesting there is also a capability difference at play. 
Qualitative analysis of trajectories also indicates that models we tested are stronger in conceptual thinking required for management tasks versus in full hands-on coding, where they are more likely to make a mistake. A potential future improvement to our evaluation is to create a more open-ended management test to ensure measured differences are due to capability; however, given that the actual tasks for managers in real life in our dataset required managers to select proposals from predefined options, we kept this grading schema to better match reality. It’s also worth noting that manager tasks in SWE-Lancer do not capture all management skills, which also include defining project timelines, deciding resource allocation, and making strategic/organizational plans. > How might the benchmark be extended to evaluate human-AI collaboration more directly, beyond the current user tool experiments? This seems like an important direction given that collaborative scenarios may be more realistic in practice than fully autonomous AI software engineers. Excellent question - we agree this is an important direction for future work! We have open-sourced our evaluation harness and the diamond split of our dataset, and we encourage the community to build upon this to further explore human-in-the-loop approaches. We regret that we had limited time and resources while building the evaluation, and were not able to explore this exciting direction. Thank you again for your valuable insights. We hope we have addressed your concerns and respectfully invite you to consider raising your score if you feel the benchmark has improved as a result of these clarifications. --- Rebuttal Comment 1.1: Comment: Thank you for the author's response. I will maintain my original positive score!
Summary: This paper introduces SWE-Lancer, a benchmark of 1488 real-world freelance software engineering tasks from Upwork valued at $1 million USD in actual payouts. The benchmark includes both Individual Contributor (IC) tasks where models generate code patches to fix issues, and Software Engineering Manager tasks where models select the best implementation proposal. Tasks are evaluated using end-to-end tests, with experiments showcasing that even the best performer (Claude 3.5 Sonnet) solves only 26.2% of IC tasks and 44.9% of Manager tasks on the public Diamond set, indicating these real-world tasks remain challenging despite recent progress in coding capabilities. Claims And Evidence: The key claims are well-supported by the comprehensive methodology and results! However, I would appreciate it if the authors could amend: 1. In Lines 134 to 144, the authors claim that previous benchmarks were biased (e.g., SWE-Bench involved selecting PRs with unit tests), while SWE-Lancer follows "unbiased data collection." However, this doesn't account for the already biased distribution of tasks present in Expensify and Upwork in general (lines 406-411). Therefore, this should be clarified upfront when describing SWE-Lancer as unbiased. Methods And Evaluation Criteria: The evaluation approach is sound: 1. The use of end-to-end tests rather than unit tests is particularly valuable, as it replicates the real-world behavior of human reviewers. 2. Mapping model performance to real monetary value provides an economic context that is more meaningful than accuracy metrics alone. 3. Including management tasks addresses an important aspect of software engineering that previous benchmarks have neglected. Theoretical Claims: NA Experimental Designs Or Analyses: The experimental design is rigorous, with comprehensive evaluation of multiple frontier models.
A minor limitation, however, is that the benchmark draws exclusively from one repository (Expensify) and one platform (Upwork), which may limit generalizability across different domains of software engineering. Supplementary Material: I have read the appendix. Relation To Broader Scientific Literature: The paper thoroughly positions SWE-Lancer relative to existing benchmarks, articulating clearly how SWE-Lancer differs from previous benchmarks like SWE-Bench by focusing on real-world, full-stack commercial engineering rather than isolated programming tasks. The comprehensive comparison table in Appendix A4 also clearly highlights the unique contributions of SWE-Lancer. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The benchmark's economic valuation framework provides a novel and intuitive way to assess the practical impact of model capabilities. 2. The inclusion of both implementation and management tasks offers a broader view and evaluation of software engineering abilities. 3. The rigor in experiments and analysis is commendable. 4. The authors open-source the benchmark and environment, which is great for future research. Weaknesses: Apart from the points mentioned above, 1. The end-to-end tests may still have limitations, such as being static, with limited guarantees on coverage. 2. Similar to prior benchmarks, the benchmark still primarily contains questions related to bug fixing (~90% of tasks), compared to new features, maintenance, etc. (Table 3) 3. The benchmark is very expensive to run in terms of computational cost and time, limiting immediate adoption by the community. Any specific efforts to address the issue would have made the impact of SWE-Lancer stronger. Other Comments Or Suggestions: The paper is well-written and is a valuable contribution toward improving SWE agents further.
However, it would have been great if the authors had made specific efforts to increase the accessibility of the dataset, given the high cost required to evaluate on SWE-Lancer. Questions For Authors: 1. How might the distribution of tasks in SWE-Lancer differ from those encountered by full-time software engineers rather than freelancers? Does this affect the benchmark's representativeness of general software engineering work? 2. What analysis have the authors done on the characteristics of tasks that models find particularly challenging versus those they solve more easily? Are there patterns in task types, or importantly domains, that predict model performance? 3. In the prompts, the authors provide "The reward for the correct fix is ${price}." Given that prior research has shown that artificially incentivizing LLMs with monetary rewards can improve their performance, have the authors evaluated artificially varying the price to see changes in model performance? For instance, could performance on easier tasks have been decreased simply because of the lower amounts provided in the prompt? 4. How does the benchmark account for multiple valid solutions to the same problem? Especially for more complex tasks, there may be various approaches with different trade-offs that could all satisfy the end-to-end tests. 5. Have the authors seen cases where the models pass the end-to-end tests but are still not fully correct? 6. Have you considered extending the benchmark to include aspects of software engineering beyond correctness, such as code quality, maintainability, or adherence to best practices? If not, this should be part of the limitations, so the community does not draw incomplete conclusions from the paper. 7. In Lines 620-621, the authors mention: "If these three categorizations were not unanimous, we manually assigned the final categories." It would be interesting to see how often manual verification was done. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your generous review and strong endorsement! Below we address your points. > Unbiased data collection vs. platform bias Great point! SWE-Lancer focuses on a single repository and freelance tasks, so it isn’t free from platform bias. By ‘unbiased data collection,’ we mean we didn’t selectively pick tasks or rely on pre-written tests; we took tasks as posted on Upwork and had professional engineers write E2E tests. We will clarify that ‘unbiased data collection’ refers to minimal artificial filtering, not the absence of any underlying biases. > Most tasks are bugfixes This reflects the distribution of the real-life Expensify repo, as it is a mature production system where most tasks involve debugging and incremental improvements. With SWE-Lancer, we set out to ‘containerize’ a company in an unopinionated manner, so we didn’t filter for task type or ease of testing. We acknowledge that one drawback is that the distribution may not be ideal or interesting; however, it ensures that our dataset is realistic. > Benchmark is expensive to run We recognize that running SWE-Lancer is resource-intensive, as there are 1488 tasks which each run in their own VM. We open-sourced our evaluation harness so researchers can select smaller subsets to manage cost (e.g., some researchers run 5–10 SWE-Lancer tasks locally). In response to your feedback, we have also updated our open-sourced repository with a SWELancer-Lite split with 174 tasks (vs. the 502 originally in the Diamond set). > SWE-Lancer task distribution Good question! We note in the limitations section that freelancer work tends to be more self-contained and well-scoped than full-time software engineering work. Furthermore, SWE-Lancer would be less representative of SWE work that is going “zero to one,” as even the tasks involving adding a new feature are building off of a well-established and maintained codebase. We will be sure to elaborate on this in the camera-ready version. 
> Characteristics of challenging tasks We analyzed pass rates by domain and task types (Tables 2 and 3 of paper) as well as individual rollouts. Overall, models performed much better on SWE Manager tasks than IC SWE tasks, likely because qualitative analysis is easier than execution. Models tend to perform better on IC SWE bugfix tasks as those are easier to root-cause vs. more open-ended tasks (adding new features) or reliability changes that span more of the codebase. All models were worse at IC SWE tasks involving application logic, and GPT-4o and o1 in particular performed worse than Claude 3.5 Sonnet on UI/UX tasks. > Artificial variation of price in prompt Great question. We found that omitting price information did not materially impact performance, so we decided not to pursue this direction further; however, in response to your feedback, we will directly evaluate sensitivity to the price offered in prompt as a new experiment for the camera-ready paper, if accepted. > Multiple valid solutions to the same problem This is a great question, and a challenge for SWE benchmarks. To address this, we employ end-to-end tests, which are more flexible than unit tests because they are agnostic to the specific implementation methodology and pass as long as the end behavior is correct. Of course, the model’s implementation could still deviate from the tests in a way that they fail to account for. To mitigate this, we allow the model to invoke a User Tool, which opens the browser and performs the action associated with the task (e.g. uploading a profile picture). You can read more about the User Tool in section 2.2.5 of our paper. > Cases where models pass E2E tests but are still not fully correct We have manually reviewed rollouts and haven’t observed this; however, the possibility of cheating and/or grading errors is a concern for every benchmark, so we cannot rule this out. 
We find that end-to-end tests are significantly harder to cheat on compared to unit tests, as it is significantly harder to fake correct behavior when the entire system is involved. We include examples of unit test hacking occurring in SWE-Bench in the Appendix to illustrate some of the advantages of end-to-end testing. > Extending beyond correctness We agree this is an exciting future direction, and will mention this explicitly under Future Work for the camera ready version of our paper. We did not want to make the evaluation too complex to run, nor the results too difficult to interpret, and imbuing notions of code quality and maintainability into the benchmark would have contributed significantly to its complexity. We believe that determining the best way to test for these is an open research question, and would encourage others to leverage our dataset and open-source codebase to experiment with more complex evaluation setups. > Frequency of manual assignment Manual assignment was performed in 15% of cases. Thank you again for your encouraging feedback!
Summary: The paper introduces SWE-Lancer, a benchmark of 1488 freelancing tasks from Upwork. The benchmark offers many advantages compared to existing SWE benchmarks: it connects solving an SWE task directly to economic benefits, contains more challenging problems, and provides a diverse dataset including UI/UX tasks. There are two types of tasks: i) Individual Contributor (IC) tasks, where models generate code patches to resolve issues; ii) Software Engineering (SWE) Manager tasks, where models select the best implementation proposal. Apart from proposing the benchmark, the authors test LLMs on these tasks and find that even the best model, Claude 3.5 Sonnet, resolves only 40.9% of the issues. Claims And Evidence: Yes, the claims are generally supported by the evidence. It would be good if the authors could do multiple runs and provide confidence intervals. LLMs are quite stochastic, and concluding from a single number is misleading. Methods And Evaluation Criteria: The evaluation criteria used are pass@1, pass@k and real-world economic benefit. These make sense for the concerned applications. However, it would be better if the authors could also propose metrics that can be evaluated on partially correct trajectories, i.e., where the issue is not resolved but the model has done something reasonable. This would be beneficial for understanding where current models stand. Theoretical Claims: Experimental paper -- no theoretical claims. Experimental Designs Or Analyses: Yes, I checked them, namely increasing the number of attempts, test-time computation and removing tool usage. Supplementary Material: Partly Relation To Broader Scientific Literature: The key contribution of proposing a benchmark of more complex and realistic problems is beneficial for the community.
Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - The paper is clearly written and easy to parse - I believe the contribution of adding more realistic problems is quite strong - They employed professionals (human-annotated data) for writing and verifying end-to-end tests - Experiments are conclusive Weakness: - Tasks are only from the Expensify repository. This limitation is quite strong IMO. - Would be good to include multimodal data - The motivation for introducing managerial tasks is rather weak. I think a benchmark with just IC tasks would also be valuable for the community - The associated costs are dynamically priced; however, in actual deployment LLMs would solve tasks immediately (at the lowest price). I believe they would not wait for weeks for the price to be hiked up? Other Comments Or Suggestions: Line 86: explain the Diamond set when it occurs for the first time. The paper mentions bias in SWE-Bench due to its unit tests; however, I believe there is also a strong bias in SWE-Lancer due to only having test-based data, and that too only from Expensify. I think getting rid of bias is difficult, and my point is to present this neither as a benefit nor as a disadvantage (lines 134-144). Also, I think on pages 1-3 the advantages of SWE-Lancer are a bit verbose, with some repeats, and could perhaps be shortened. ### Update after rebuttal All my questions have been addressed. I also agree with the motivation to include managerial tasks. For authors: please include the promised changes in the final version. I have changed the score to accept. Questions For Authors: Do you know the split of tasks by coding language as well? That would be helpful. Do you have hints or trends observed during execution that show whether existing LLMs were trained on similar data? Did you also try some open-source models on these tasks? Code Of Conduct: Affirmed. Overall Recommendation: 4
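For reference, the pass@k metric cited under Methods And Evaluation Criteria above is usually computed with the unbiased combinatorial estimator (the probability that at least one of k samples, drawn without replacement from n total attempts of which c passed, is correct). That SWE-Lancer uses exactly this form is an assumption, and the function name `pass_at_k` is illustrative:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated for a problem
    c: number of samples that passed the tests
    k: budget of attempts being scored
    """
    if n - c < k:
        # Fewer than k failing samples exist, so every k-subset
        # must contain at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With k = 1 this reduces to the plain pass rate c / n, which matches the pass@1 numbers quoted in the rebuttal below.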
Rebuttal 1: Rebuttal: Thank you for this thoughtful review! We appreciate your feedback and address your points below. > Multiple runs and CIs Excellent point. In response to your comment, we performed 3 runs of GPT-4o and o1 on the IC SWE and SWE Manager Diamond subsets to provide confidence intervals in the camera-ready version if accepted. We are still running Claude (which takes longer due to rate limits). We calculate 95% CIs using the standard bootstrap procedure that resamples model attempts per problem to approximate the metric’s distribution. Our results are below: - GPT-4o (IC SWE Diamond): avg pass@1 = 8.0% (95% CI [4.2%, 11.4%]) - o1 (IC SWE Diamond): avg pass@1 = 17.5% (95% CI [13.1%, 21.9%]) - GPT-4o (SWE Manager Diamond): avg pass@1 = 35.1% (95% CI [30.7%, 40.0%]) - o1 (SWE Manager Diamond): avg pass@1 = 40.8% (95% CI [35.3%, 46.0%]) > Partial progress metrics Great idea! In response to your feedback, we updated the grading and reporting modules of our open source scaffold to report both full and partial credit. We hope this will allow external researchers to experiment with this idea more easily! > Single repository We appreciate this concern and recognized it as a key limitation in our paper. Still, we believe our contribution provides a strong foundation for expanding to more repositories in the future. We initially attempted a multi-repository approach but realized we needed to de-risk our process on a single sufficiently diverse repository first. We chose Expensify for its range of apps (e.g., web, iOS, Android, desktop), tasks (UI/UX, application logic, server-side logic, system-wide reliability improvements, managerial tasks), and maturity (Expensify serves millions of customers). Building our benchmark helped us create a scalable task-creation process to tie LLM performance to economically valuable work. 
By open-sourcing our methodology and code (with modular components that are applicable across repos), we hope community contributors can now expand to additional codebases. > Multimodal data Great point! Our environment already supports image screenshots via the User Tool. In response to your comment, we added a utility to the open-source SWE-Lancer codebase to automatically download each task’s video – along with README instructions to enable easier video task creation in the future. Let us know if this addresses your concerns! > Motivation for managerial tasks While many believe LLMs will soon automate IC coding tasks, others argue managerial tasks (like oversight of proposed code changes) will still require human supervision (see e.g. https://www.wsj.com/articles/how-ai-tools-are-reshaping-the-coding-workforce-6ad24c86). In the Expensify repository, managers—who review and select the best technical proposals—earn the same as the freelance engineers implementing the solutions, and we wanted to reflect the full range of repository tasks. Results on management tasks are also unique in helping evaluate the “management is protected from LLMs” hypothesis. For those interested exclusively in the IC SWE tasks, we report these metrics separately in our open source code. We will clarify this in the camera-ready version if accepted. >Dynamic pricing Great question! You’re right that a sufficiently powerful model might solve certain tasks right away – effectively preventing price hikes altogether. But that’s exactly the disruption our paper sets out to measure. Our benchmark reflects how tasks are priced in today’s freelancer markets, where unresolved jobs see their rates climb over time. By quantifying this status quo and then applying LLMs to it, SWE-Lancer demonstrates the extent of the economic impact these models would have. > Diamond Set Thank you. 
We will clarify earlier that the “Diamond” subset is the publicly released subset worth $500,800 in payouts; the rest is a private holdout. > Bias We mitigate test-based bias with human-written E2E tests that mirror realistic user flows, rather than unit tests from the original codebase (biased towards code people chose to write tests for). Still, all benchmarks have biases, so we encourage using multiple real-world benchmarks to capture a broader range of capabilities. > Language split All of the coding tasks are in JavaScript, TypeScript, and React or React Native. > Data contamination Because these tasks are from a public repository, we checked for data contamination by comparing pass@1 on tasks published before vs. after model knowledge cutoff dates (Table A.2) and found no evidence of memorization. > Open-source models We haven’t tested open-source models yet but plan to run Llama and DeepSeek for the camera-ready version. We’ve also open-sourced our evaluation harness and the Diamond Set to allow others to test models. Thank you again for your valuable insights. We hope we have addressed your concerns and respectfully invite you to consider raising your score if you feel the benchmark has improved as a result of these clarifications.
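The "standard bootstrap procedure that resamples model attempts per problem" mentioned in this rebuttal can be sketched as follows. The `attempts` layout (a list of per-problem 0/1 outcome lists) and the function name are illustrative assumptions, not the authors' actual code:

```python
import random


def bootstrap_ci(attempts, n_boot=10_000, alpha=0.05, seed=0):
    """Bootstrap a (1 - alpha) confidence interval for pass@1.

    attempts: list of per-problem outcome lists, each entry 0 or 1
              (assumed layout: one sublist per benchmark problem).
    Resamples problems with replacement, then scores one sampled
    attempt per resampled problem, approximating the metric's
    sampling distribution.
    """
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(rng.choice(attempts)) for _ in attempts]
        stats.append(sum(sample) / len(sample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The percentile endpoints taken from the sorted bootstrap statistics correspond to the 95% CIs reported for GPT-4o and o1 above when alpha = 0.05.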
Vector Grimoire: Codebook-based Shape Generation under Raster Image Supervision
Accept (poster)
Summary: This work focuses on SVG generation. Its model consists of two modules: a Visual Shape Quantizer, which learns to map raster images onto a discrete codebook by reconstructing them as vector shapes, and an auto-regressive Transformer model, which jointly learns the distribution over shape tokens, positions and textual descriptions. Claims And Evidence: There is no comparison with IconShop on FIGR-8. This work does not use ground-truth paths as supervision, while IconShop uses ground-truth paths. Without such a direct comparison, it is impossible to identify the contribution of the proposed work: a) it achieves better performance than the method that uses ground-truth paths. If this is the case, the contribution is very significant, i.e., changing the traditional paradigm of SVG generation; or b) it does not achieve better performance than the method that uses ground-truth paths. In that case, the contribution simply lies in avoiding the use of ground-truth paths. Methods And Evaluation Criteria: There is no qualitative comparison between baselines and the proposed method. The quantitative comparison makes sense. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: In the comparison to the SDS-based methods, the chosen baselines are not the most advanced ones [1][2][3], and the results of the SDS-based methods look much worse than those reported in the original papers. I suggest the authors provide more explanation for this. [1] https://arxiv.org/pdf/2312.16476 [2] https://arxiv.org/pdf/2405.10317 [3] https://arxiv.org/pdf/2411.16602 (optional as it is too close to the submission deadline) Supplementary Material: I checked the post-processing part. Relation To Broader Scientific Literature: It introduces a new paradigm for training SVG generation models which does not rely on SVG data (which requires lots of data for training) and is not SDS-based (which can be very slow). The idea of splitting the image into patches in the VSQ stage is smart.
Essential References Not Discussed: The advanced SDS-based methods are not discussed [1][2][3] [1] https://arxiv.org/pdf/2312.16476 [2] https://arxiv.org/pdf/2405.10317 [3] https://arxiv.org/pdf/2411.16602 (optional as it is too close to the submission deadline) Other Strengths And Weaknesses: No additional comments. Other Comments Or Suggestions: No additional comments. Questions For Authors: Does the extension to support stroke width and color prediction mentioned in Sec 5.3 require re-training of the proposed two modules? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's time and thoughtful feedback on our manuscript. Given the time constraints of this rebuttal, we have focused on addressing the major concerns as follows. --- ## **Vector-based baselines** We have extended our analysis to two vector-supervised methods – **DeepSVG and IconShop** – training them on the same FIGR-8 data used for Grimoire. Unlike Grimoire, DeepSVG supports conditioning only on class identifiers, therefore we assigned a unique identifier to each class in FIGR-8. Upon suggestion of reviewer `f1Fu`, **we have also finetuned Llama 3.2** on FIGR-8 with minimal data preprocessing. We believe this to be an insightful analysis that **shows how tailored tokenization pipelines and extensive data preprocessing are necessary for other vector-supervised models to perform effectively.** Despite raster and vector data providing very different supervising signals, we believe that this analysis ultimately helps better position our method. **Llama**. We fine-tuned Llama (instruction tuning) for three days on eight H100 GPUs. Minimal preprocessing includes rounding up the path coordinates to integer values. Upon inspection, this did not affect the quality of the image. We use the original chat template and included special tokens to delimit the SVG code. **The performance at inference appears very poor**. The model predicts the most recurrent patterns in the dataset, resulting mainly in circular artifacts. **The SVG syntax is, however, correct** most of the time, allowing rendering. **DeepSVG**. We train DeepSVG using the official training script. The model converges within a few hours, but **the results are also not good**, yielding the lowest CLIPScore and FID among all models. **IconShop**. We also re-trained the original IconShop model on the subset of FIGR-8 used in Grimoire. In this case, **the performance of the model is comparable to Grimoire**, resulting in slightly better CLIPscore and FID. 
All results are reported in the table below. | Model | CLIPScore | FID | Conditioning | Supervision | |-----------|-----------|-------|--------------|-------------| | DeepSVG | 22.10 | 58.03 | Class | Vector | | Llama 3.2 | 25.45 | 38.93 | Prompt | Vector | | Grimoire | 29.00 | 0.64 | Prompt | Raster | | IconShop | 31.18 | 0.40 | Prompt | Vector | --- ## **Additional SDS-based methods** We have added two more recent SDS-based methods (**SVGDreamer and Chat2SVG**) to our qualitative analysis for the final version of our manuscript. We have also included a **new quantitative analysis of all these models.** The point of the results in section 4.5 is not comparing general generative capabilities but highlighting that besides the aesthetically pleasing results, **this family of models falls short in representing a specific target domain** and provides no way to be extended to new data. Making this analysis quantitative is not straightforward. FID score between image distribution is reliable on thousands of samples, but the computational cost of SDS-based models requires up to hours for a few samples (e.g. SVGDreamer) or uses costly proprietary models (e.g. Chat2SVG). We have hence used the PSNR of 20 generated samples from all models. **The results highlight how all models fall short on our dataset distribution.** | Class | Model | Average PSNR (dB) | |---------------|---------------|-------------------| | **User** | CLIPdraw | 28.68 | | | **Grimoire** | **45.19** | | | VectorFusion | 36.62 | | | Chat2SVG | 37.62 | | | SVGDreamer | 34.53 | | **Heart** | CLIPdraw | 28.54 | | | **Grimoire** | **45.66** | | | VectorFusion | 38.54 | | | Chat2SVG | 37.88 | | | SVGDreamer | 34.44 | --- ## **Other questions and observations** > Does the extension to support stroke width and color prediction mentioned in Sec 5.3 require re-training of the proposed two modules? It does require retraining the VSQ. This is also the case for vector-supervised methods. 
However, a significant difference is that our VSQ prediction heads are very easy to develop or enable and require no further changes, whereas vector-supervised methods like IconShop require redesigning their tokenizer and reprocessing all the data. Finally, if the VSQ must be retrained with some additional modules (e.g., stroke color and width), the rest of the network can be reloaded, speeding up convergence. > "the contribution simply lies on avoiding using ground-truth paths." The main novelty of our work is providing a text-to-SVG framework which learns from data where the vector paths **do not exist**. This is different from simply avoiding existing vector datasets. --- Rebuttal Comment 1.1: Comment: Thanks for the feedback. I would like to learn more comments from other reviewers before making a final decision. I currently have two concerns. First, while I acknowledge that IconShop utilizes SVG path data while the proposed method does not, IconShop still outperforms the proposed approach. This raises questions about the necessity and significance of the proposed method. Second, a key advantage of SDS-based methods is their ability to operate independently of specific data distributions by leveraging prior knowledge from a pre-trained model. In contrast, the proposed method lacks this flexibility.
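For reference, the PSNR figures reported in the rebuttal above follow the standard definition of peak signal-to-noise ratio. This pure-Python sketch (with images represented as flat lists of pixel values, an assumed simplification; real evaluations operate on rendered image arrays) shows the computation:

```python
import math


def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-sized
    images given as flat lists of pixel values in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher values indicate closer pixel-level agreement with the target, which is why the rebuttal uses PSNR to measure how well each model's renders match the dataset distribution.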
Summary: This paper introduces a text-guided SVG generation model, i.e., GRIMOIRE, using only raster image supervision. The SVG generation task is formulated as the prediction of a series of individual shapes and positions. The experiments demonstrate the effectiveness of the proposed method. ## update after rebuttal I appreciate the authors' clarifications. Most of my concerns have been addressed by the rebuttal. I lean toward keeping my score, provided the additional evaluations and discussions are included in the revised version. Claims And Evidence: The discussion of the existing SDS-based methods is not very clear. Sec. 2.1 only describes several related works without discussing the difference between the existing works and the proposed method. Sec. 5.4 only shows visual comparisons with the existing works. It would be better to include this analysis in the paper rather than in the supplementary material. Methods And Evaluation Criteria: The proposed method is a reasonable and effective solution to SVG generation. Theoretical Claims: Yes. Experimental Designs Or Analyses: The current evaluation is not thorough. Only Im2Vec is used for comparison. To make the result more convincing, additional existing SVG generation works should be compared and discussed. Supplementary Material: Yes. Relation To Broader Scientific Literature: The proposed method enhances the generation quality over existing raster-supervised SVG models and enables flexible text-conditioned SVG generation. Essential References Not Discussed: No. Other Strengths And Weaknesses: The overall pipeline is a reasonable solution to perform SVG generation. The experiments demonstrate the effectiveness of the proposed method. Other Comments Or Suggestions: It would be better to reorganize the structure of section 5. The first paragraph states two aspects of the result, but there are four subsections here. Questions For Authors: Are there any quantitative evaluations for text-conditioned SVG generation? Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for taking the time to review our manuscript and providing valuable feedback. --- ## **Discussion on SDS-based Methods** In the final version of the manuscript, we plan to more clearly highlight the differences between SDS approaches and Grimoire at the end of section 2.1 as follows: > “These methods do not involve training for the vector generation process and instead rely on models trained on raster images for other tasks, making them difficult to extend to new data.” Regarding this point, we have also: 1. incorporated two more recent SDS-based methods, **SVGDreamer** and **Chat2SVG**, and will include additional qualitative results in the final version of the manuscript. 2. Added a quantitative comparison of these methods. This analysis has been included in our response to Reviewer `sRe3`. We used **PSNR** to evaluate some generated samples from all the SDS-based models, highlighting how these methods fall short when compared to a specific dataset. We chose not to use **FID**, as it requires a large number of samples to be statistically significant, which was not feasible within the constraints of SDS methods: slow generations e.g. SVGDreamer, expensive inference e.g. Chat2SVG based on Claude APIs. --- ## **Other Questions** > Are there any quantitative evaluations for text-conditioned SVG generation? We have used CLIPScore and FID for text-conditioned generation across the paper. As mentioned above, we have now also added PSNR for the comparison with SDS-based methods. > The first paragraph states two aspects of the result, but there are four subsections here. Thank you for your suggestion on improving the writing in Chapter 5. We plan to revise the beginning of the chapter to more clearly outline its subsections.
Summary: The authors propose an SVG generative model, GRIMOIRE, which can be conditioned on a text prompt or a partially completed SVG. The primary innovation in the paper is training a VQ-VAE which tokenizes patches of rasterized SVGs into discrete tokens that, crucially, can be reconstructed into SVG primitives (primarily Bezier curves). This tokenizer can be trained end-to-end on purely raster images by a clever use of a differentiable renderer (DiffVG) in the VQ-VAE decoder, which enables backpropagating the L2 pixel-space reconstruction loss into and through the now-differentiable continuous SVG parameters. Once such a tokenizer has been trained, the authors use it to tokenize the MNIST, Fonts and FIGR-8 datasets and train a standard autoregressive transformer using a next-token prediction loss to create an auto-regressive generative model for SVGs. The primary baseline for comparison is Im2Vec. Claims And Evidence: Yes. Methods And Evaluation Criteria: - The benchmark datasets seem reasonable. - However, in my opinion there are a number of missing baselines. In particular, for the reconstruction experiments in Tables 1 and 2 that validate the VQ-VAE, I would suggest also including LIVE (https://arxiv.org/abs/2206.04655), a seminal paper in the area of SVG reconstruction of raster images. StarVector (https://arxiv.org/abs/2312.11556) would also be an appropriate baseline for the generation experiments. - Additionally, a simple baseline that would be very insightful: fine-tuning a standard LLM, e.g. Llama, on the Fonts and FIGR-8 datasets, which IIUC have the underlying SVG code available, using minimal preprocessing of the SVG code and using the standard Llama tokenizer directly on the SVG code. This would validate or invalidate the need for such a complex tokenization method and the claims in the paper about the need for more rasterized training data and the complex pre-processing required to train an SVG generation model directly on the SVG code.
- Besides all quantitative results, I would also have liked to see the average SVG code length of all methods (GRIMOIRE and Im2Vec) for each dataset. When available, the average SVG code length in the ground-truth dataset would also be helpful. It is currently very difficult to understand how fair the comparison between the methods is without some way to know how much compression each method achieves. I have a suspicion, given the number of Bezier curves used to represent each relatively simple image patch, that GRIMOIRE may be relatively uncompressed and verbose. - No measure of significance or error bars is provided on any of the quantitative results. Theoretical Claims: N/A Experimental Designs Or Analyses: Checked. Supplementary Material: N/A none provided. Relation To Broader Scientific Literature: A very nicely written related work section is provided. Essential References Not Discussed: LIVE is definitely an important missing reference: https://arxiv.org/abs/2206.04655. There are also a series of follow-up papers building on top of LIVE. Likewise StarVector: https://arxiv.org/abs/2312.11556 Other Strengths And Weaknesses: Strengths: - I appreciate that the authors will make their code open source. - The use of DiffVG to enable learning a discrete codebook for SVG primitives, and the fact that this enables training the tokenizer on purely rasterized images, is very clever and a very nice contribution of the paper. Weaknesses: - The only structural primitive available to the VQ-VAE decoder is Bezier curves. SVG itself supports other primitives, e.g. <circle>, <line>, etc., which may be more appropriate, interpretable and compressed representations of certain image patches. But having to make a discrete choice between different primitives would make the decoder non-differentiable, meaning that, other than for stylization (stroke width, color), all structure in the SVG must be represented as relatively uncompressed, uninterpretable paths.
- The need for specialized patch extraction methods on a per-dataset basis, as shown in Figure 3, is a major weakness of the method. Surely a more general patch extraction method would have been possible to employ without a significant degradation in quality? It would also have enabled training a single joint codebook and generative model for all datasets, enabling potential transfer learning and greatly enhancing the generality of the method. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
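The VQ-VAE bottleneck described in the review's summary (mapping encoder outputs for image patches onto their nearest codebook entries) can be sketched in miniature. This pure-Python version is illustrative only: the actual VSQ operates on feature tensors, uses the straight-through estimator so the L2 pixel loss can bypass the non-differentiable argmin and reach the encoder, and decodes codes into Bezier strokes via DiffVG.

```python
def quantize(z, codebook):
    """Nearest-neighbour codebook lookup, as in a VQ-VAE bottleneck.

    z:        list of d-dimensional encoder outputs, one per image patch.
    codebook: list of d-dimensional learned code vectors.
    Returns (indices, quantized vectors). In training, gradients are
    copied straight through from the quantized vector to z (the
    straight-through estimator), since the argmin itself has no gradient.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    indices, quantized = [], []
    for vec in z:
        idx = min(range(len(codebook)), key=lambda k: dist2(vec, codebook[k]))
        indices.append(idx)
        quantized.append(codebook[idx])
    return indices, quantized
```

The resulting index sequence is exactly what the auto-regressive transformer is then trained to predict token by token.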
Rebuttal 1: Rebuttal: We sincerely appreciate your time and thoughtful feedback on our manuscript. Given the time constraints of this rebuttal, we have focused on addressing the major concerns as follows. --- ## **Code Length of the SVG** For Im2Vec, the number of paths and control points per path is fixed at eight and ten, respectively. **Grimoire dynamically adjusts the number of strokes based on the target complexity.** After inspecting over 15,000 generated SVGs across all FIGR-8 classes, we found that **the average number of paths is 95**. **We have uploaded samples and reconstructions for the “user” class for both models to our anonymous repository**. For the reconstructions we included the original ground truth for reference. For single-target reconstructions, Im2Vec tends to overlap strokes around the outline, whereas for more complex targets, the strokes collapse. We encourage the reviewer to directly inspect the SVG code length on the anonymous GitHub Repository: [Link](https://github.com/anon-papercode/15874/tree/main/Showcase). We found this analysis insightful and plan to incorporate it into the results section in the final version of the manuscript. --- ## **Additional Baselines** **Thank you for suggesting finetuning a standard LLM with minimal preprocessing.** This was an insightful suggestion, especially given that the tokenization pipelines of other vector-supervised models (e.g., IconShop) are a considerable limitation. **We have fine-tuned LLaMA 3.2 on the same FIGR-8 subset used for Grimoire with minor preprocessing** and also added comparisons with other vector-supervised models using their respective tokenization pipelines (DeepSVG, IconShop). This analysis has been included in our response to reviewer `sRe3`. --- ## **Missing References** We appreciate all reviewers for highlighting important missing references, such as LIVE. We plan to expand the related work section to incorporate all suggested papers. Specifically, we will: 1. 
**Before L87:** Introduce vector-supervised methods that predate the LLM era, citing **DeepSVG, Google-Fonts, and DeepVecFont**. 2. **Immediately after:** Among the LLM-based approaches, explicitly mention **StarVector** and **Chat2SVG**. 3. **At the end of the section:** Where we discuss SDS-based methods, include **SVGDreamer** and dedicate a small paragraph to neural implicit representations, citing **NiVel, Text-to-Vector Generation with Neural Path Representation, and NeuralSVG**. While these works differ significantly in methodology, they address similar problems to Grimoire and will be appropriately cited. We hope this revision sufficiently addresses concerns regarding missing references. --- Rebuttal Comment 1.1: Comment: I thank the author's for their response to my review. In particular I appreciate the addition of the LLaMA baseline, the inclusion and results of which I regard as a positive addition to the paper. I also appreciate adding the average SVG code-lengths (in terms of number of paths) which demonstrated, as I feared, that the Grimoire code is relatively complex and verbose. Balancing these two factors, I maintain my original score. The paper is interesting and a useful addition to the literature with some significant disadvantages. --- Reply to Comment 1.1.1: Comment: Thank you for your positive comments and for appreciating our work! We would like to kindly offer an additional clarification that may support your evaluation. **The higher number of path segments observed in the FIGR8 experiments is not an inherent limitation of the method**, but rather **a result of how the vector primitives are designed**. For example, as illustrated in the preliminary results in Figure 9, when we reconstruct the image using a layer-based approach rather than stroke-based primitives, **the number of paths required is significantly reduced**—closely resembling real-world SVG files. 
**SVG files generated using this layered setting are available in the same folder**, should you be interested in exploring this aspect further. [Link to folder.](https://github.com/anon-papercode/15874/tree/main/Showcase/Emoji) We hope this provides a helpful additional perspective.
Summary: This paper presents GRIMOIRE, a novel text-guided generative model for scalable vector graphics (SVG). The model consists of two main components: a Visual Shape Quantizer (VSQ), which learns to reconstruct raster images as vector shapes through a discrete codebook; and an Auto-Regressive Transformer (ART), which models the joint distribution over shape tokens, positions, and textual descriptions to generate SVGs from natural language prompts. Unlike prior approaches requiring direct supervision from SVG data, GRIMOIRE is trained only with raster image supervision, enabling it to scale to larger datasets. The authors evaluate their method on tasks such as closed-shape reconstruction (MNIST, Emoji) and stroke-based generation (icons, fonts), demonstrating improved flexibility over SVG-supervised methods and competitive generative quality against image-supervised baselines. Claims And Evidence: The proposed architecture builds incrementally upon established techniques in vector image representation and generation. It integrates standard components in a coherent manner. While the claims are plausible and generally supported by qualitative and quantitative evidence, the overall novelty is modest. The improvements are incremental, and while the results are reasonable, they do not clearly establish substantial advancement over existing methods. Methods And Evaluation Criteria: The benchmark datasets used in this work are appropriate, as they align well with those used in prior literature, allowing for meaningful comparisons. However, the methodology would benefit from clearer descriptions of which components are used in each experiment. Since the paper combines several previously established techniques, it is important to explicitly state which variants or components are evaluated in each experiment. Theoretical Claims: There is no theoretical claim in this work. 
Experimental Designs Or Analyses: ## Vector Quantization for SVG Representation (Tables 1 and 2): An ablation study would greatly strengthen the experimental section. For example, variations in encoder-decoder configurations (e.g., patch size, grid size, codebook size, or SVG command set) could demonstrate the robustness and contribution of the individual components. The current comparisons, such as with Im2Vec, leave some ambiguity about whether the improvements stem from the proposed method or the dataset/model choices. ## SVG Generation (Table 3): The evaluation should include a comparison with IconShop to give a more comprehensive view of generative quality relative to recent state-of-the-art models. Supplementary Material: All parts of the supplementary materials have been reviewed. However, it would be helpful if the authors explicitly stated the goals of each experiment in the supplementary section to improve clarity. Relation To Broader Scientific Literature: This work contributes to the growing field of vector image representation and generation by bridging the raster and vector domains. The approach is particularly promising for multimodal applications, including vision-language models and code generation involving SVG. Given the relevance to downstream applications and the increasing interest in multimodal generation, this research has strong potential impact. Essential References Not Discussed: Several key works are missing that are essential for contextualizing this paper’s contribution: Google-Fonts (ICCV 2019): A foundational model for deep SVG generation combining image auto-encoders with SVG decoders. DeepSVG (NeurIPS 2020): Introduced transformer-based autoencoders for vector graphics; its SVG tokenization remains widely used. LIVE (CVPR 2022): Demonstrates SVG translation from raster images without score distillation loss. 
DeepVecFont / DeepVecFont v2 (SIGGRAPH Asia 2021, CVPR 2023): Addressed SVG command modeling and differentiable rasterization. These works are highly relevant to both the architecture and training methodology proposed in GRIMOIRE and should be discussed in the paper. Notably, vector image representation has a history that precedes the large language model (LLM) era, contrary to the implication in the related work section. Other Strengths And Weaknesses: ## Strengths: Proposes a new framework for SVG generation that avoids the need for direct SVG supervision, opening the door to larger training datasets. Addresses a relatively underexplored area in generative modeling, with practical applications in design, UI generation, and code synthesis. ## Weaknesses: Lacks sufficient citation and discussion of related foundational works. Ablation studies are missing, making it hard to isolate the contributions of each component. Experimental gains over prior work are modest. Other Comments Or Suggestions: ## LLM Baselines: As a non-essential suggestion (outside rebuttal), it would be informative to include SVG generation results from public large language models (e.g., OpenAI’s GPT). Although their SVG generation quality is currently limited, showing this comparison would highlight the advantage of GRIMOIRE. ## Typos: There is a broken image reference at line 322: “Qualitative results in ?? confirm this behaviour on the MNIST dataset.” Questions For Authors: Why use a raster-domain image encoder (e.g., ResNet-18) instead of a vector-domain encoder such as that used in DeepSVG? This choice impacts the learned latent representation, and a comparison would help clarify its effects. How does the VSQ module differ from the VQ-based architecture in Im2Vec? The performance seems similar, so it's unclear how much the modification contributes to overall results. 
Can the authors include a comparison with IconShop for SVG generation? Given that IconShop is a recent and strong baseline, its inclusion would help contextualize the performance of GRIMOIRE. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript and providing insightful feedback. **We conducted ablation experiments to address your concerns.** We have explored **different patch and grid sizes** on the MNIST dataset, analyzed the **impact of stroke length and width** on FIGR8, and investigated the **effects of varying codebook sizes**. Additionally, we have included **comparisons with three vector-supervised methods**. --- ## Ablation: Patch and Grid Sizes (MNIST) We trained the VSQ module on MNIST with different grid sizes (paper value: 5) and patch sizes (paper value: 128x128). Key findings: - **The patch size variations had minimal impact on model performance.** - **Grid size variations led to improvements for larger numbers of patches per image**, likely due to the simpler topology of smaller patches. - **In all cases, the reconstruction error remains lower than Im2Vec.** The table below reports the MSE on the test set. | Patch Size | Tiles = 3 | Tiles = 5 | Tiles = 8 | |------------|-----------|-----------|-----------| | 32 | 0.093 | 0.092 | 0.078 | | 64 | 0.092 | 0.09 | **0.071** | | 128 | 0.09 | 0.094 | 0.078 | --- ## Ablation: Stroke Length (FIGR8) To assess the impact of stroke properties on VSQ performance, we conducted two ablations: 1. **Stroke length variations:** We created patches with smaller or larger strokes. Results show that **shorter strokes yield lower reconstruction errors**, similarly to the grid size variations. 2. **Multiple stroke predictions per patch:** We extended the prediction head of the VSQ to output two strokes per patch instead of one (as in the paper). Results show that more than one segment per shape consistently degrades the reconstruction quality. This suggests that the complexity of strokes in our dataset does not require multiple Bézier curves per patch. 
| Stroke Length | Segments | Stroke Width | MSE | |----------------|----------|--------------|--------| | 3.0 | 1 | 0.4 | 0.0049 | | 5.0 | 1 | 0.66 | **0.011** | | 8.0 | 1 | 1.06 | 0.023 | | 3.0 | 2 | 0.4 | 0.0052 | | 5.0 | 2 | 0.66 | **0.017** | | 8.0 | 2 | 1.06 | 0.023 | --- ## Ablation: Codebook Size To understand the impact of codebook size $|V|$, we trained the VSQ on FIGR8 using all the sizes proposed in the original Finite Scalar Quantization paper (240, 1000, 4375 [our paper], 15360, and 64000). The results reported below highlight two key observations: - **The reconstruction error decreases significantly up to $|V| = 4375$ but shows only marginal improvements beyond this point.** - **Using excessively large codebooks does not justify the increased computational cost.** | V | MSE | |-------|---------------------| | 240 | 0.0205 | | 1000 | 0.0175 | | 4375 | **0.0145** | | 15360 | 0.0130 | | 64000 | 0.0128 | --- ## Comparison with Vector-Supervised Models **We have added a comparison with three vector-supervised models**, including a publicly available LLM: **LLama 3.2, DeepSVG, and IconShop**. We trained the models on the SVG version of FIGR8. A detailed analysis is in our response to reviewer `sRe3`. --- ## Missing References We appreciate your feedback regarding missing references. **We have addressed this in our response to reviewer `f1Fu` and outlined how we will incorporate these references in the final manuscript.** --- ## Other Questions > It would be helpful if the authors explicitly stated the goals of each experiment in the supplementary section. We agree. **We will add a brief introductory sentence before each subsection of the Appendix.** > Broken image reference at line 322. Thank you for catching this. We will remove the reference, as the corresponding figure is no longer part of the main manuscript. > Why use a raster-domain image encoder instead of a vector-domain encoder? 
A core novelty of Grimoire is that **the entire framework operates in the raster domain**. Using a vector-domain encoder would contradict this fundamental approach, making the pipeline **no longer vector-free**. > How does the VSQ module differ from the VQ-based architecture in Im2Vec? Im2Vec utilizes an **RNN-based architecture**, which does not belong to the family of **VQ models using discrete embedding codebooks**. Another key methodological difference is that Im2Vec attempts **end-to-end SVG generation**, encoding an entire image and predicting the individual components with an RNN. In contrast, our VSQ first learns vector representations from image patches, then the ART model learns to arrange these patches in the correct sequence. This modular approach significantly **enhances the performance of raster-supervised generative models** compared to Im2Vec.
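To make the two-stage split concrete, here is a minimal numpy toy (hypothetical code, not Grimoire's actual implementation; the image, patch size, grid, and codebook size are all made up): the quantizer stage assigns each raster patch to its nearest entry in a discrete codebook, yielding the per-cell token grid that an autoregressive model would then learn to generate.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, tile):
    """Split a square image into flattened, non-overlapping tile x tile patches."""
    h, w = img.shape
    return [img[i:i + tile, j:j + tile].ravel()
            for i in range(0, h, tile)
            for j in range(0, w, tile)]

def quantize(patch_feats, codebook):
    """Assign each patch feature to the index of its nearest codebook entry."""
    dists = ((patch_feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

img = rng.random((32, 32))                   # stand-in raster image
patches = np.stack(extract_patches(img, 8))  # 4x4 grid -> 16 patches of 64 dims
codebook = rng.random((10, 64))              # |V| = 10 discrete "shape" codes
tokens = quantize(patches, codebook)         # one discrete token per grid cell
```

In the real pipeline the codebook entries decode to vector primitives (e.g., Bézier strokes or closed shapes) rather than raw pixel patches, and the ART transformer models the resulting (token, position) sequence autoregressively.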
WMAdapter: Adding WaterMark Control to Latent Diffusion Models
Accept (poster)
Summary: This paper introduces WMAdapter, a plug-and-play watermarking solution for latent diffusion models that embeds watermarks during the image generation process without modifying the original diffusion components. The authors propose two key innovations: (1) a contextual adapter that conditions on the content of the cover image to generate adaptive watermark embeddings, and (2) a hybrid finetuning strategy that preserves the integrity of diffusion components while suppressing artifacts. Claims And Evidence: The claim of superior image quality is backed by quantitative metrics (PSNR, FID) and qualitative visual comparisons showing fewer artifacts than other methods. The comparison with AquaLoRA seems to focus on visual quality differences without acknowledging that AquaLoRA was designed for more stringent white-box attack scenarios. Methods And Evaluation Criteria: The watermarking architecture (using contextual adapters attached to VAE decoder) is well-motivated and explained clearly. The hybrid finetuning strategy addresses a real issue (artifacts) observed in preliminary experiments. However, the evaluation has some notable limitations: - The paper acknowledges the quality-robustness trade-off but appears to prioritize visual quality over robustness in their design choices. This is a legitimate approach, but it would benefit from more explicit discussion, as robustness is generally considered more critical for watermarking applications. - The paper lacks some important adaptive evaluations. In particular, one important recent work is not addressed: "Leveraging Optimization for Adaptive Attacks on Image Watermarks". This study presents important challenges to watermarking methods that should be evaluated. Theoretical Claims: The paper is primarily empirical rather than theoretical. Experimental Designs Or Analyses: The experimental designs are generally sound. Supplementary Material: Yes. 
Relation To Broader Scientific Literature: The paper effectively positions WMAdapter within the broader context of: 1) Traditional post-hoc watermarking methods (frequency domain, optimization-based, encoder-decoder) and 2) Diffusion-native watermarking approaches (initial noise-based vs. latent space-based). Essential References Not Discussed: Leveraging Optimization for Adaptive Attacks on Image Watermarks Other Strengths And Weaknesses: Strengths: * The paper addresses a clear practical need for high-quality watermarking in diffusion models * The solution is elegant, lightweight (1.3MB) and computationally efficient (30ms per image) * The hybrid finetuning strategy is novel and effective at preserving image quality * The approach is practical and implementable, with clear explanations of design choices Weaknesses: * The work prioritizes visual quality over robustness, which may limit its practical application in scenarios where security is paramount * The evaluation lacks assessment against rotation attacks, which are common in real-world scenarios * The paper does not evaluate against state-of-the-art adaptive removal methods using optimization or generative models * There is inconsistency between the methods shown in qualitative and quantitative evaluations (Figure 1) * Some attacks demonstrated in Figure 8 are not thoroughly quantified in the main results (Table 2) Other Comments Or Suggestions: * Why is there a discrepancy between the methods shown in the qualitative demonstration (left side of Figure 1) and the quantitative evaluation (right side of Figure 1)? * Figure 8 demonstrates robustness against many additional distortions, but Table 2 only presents a limited subset. Why weren't all these distortions quantitatively evaluated in the main results? * The paper lacks evaluation of rotation robustness, which is a critical transformation for watermark assessment. How does WMAdapter perform against rotation attacks, and why was this evaluation omitted? 
* Recent works such as "Leveraging Optimization for Adaptive Attacks on Image Watermarks". How does WMAdapter resist these targeted attacks? * The paper appears to prioritize visual quality over robustness. Given that robustness is generally considered more critical for watermarking applications, can you discuss this design choice and the explicit trade-offs made? * How does the approach scale to larger or smaller watermark sizes beyond the 48-bit watermark used in experiments? Would it require retraining the entire system or just the adapter? * Would the watermark still be recoverable after significant editing? * Some minor typos, like in L400, "StbaleSignature" instead of "StableSignature" Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q: Discrepancy between qualitative and quantitative evaluation in Figure 1.** Different types of watermarks introduce different types of artifacts. Although FID is one of the most widely used quantitative metrics for image quality, it often struggles to accurately reflect the visual impact of diverse artifacts. Bridging the gap between quantitative metrics and human perception remains an open research problem and an active area of investigation. &nbsp; **Q: Why weren't all distortions in Figure 8 quantitatively presented in Table 2** Due to space limitations, we had to make a trade-off in presentation. In Table 2, our goal was to provide a balanced overview of both image quality and robustness metrics. Including all attack types and quality metrics in a single table would have been impractical. This design choice is similar to that of Table 1 in *Stable Signature*. To complement Table 2, we included Figure 8, which provides a more comprehensive evaluation by illustrating robustness under a wider range of attack types and intensities. &nbsp; **Q: How does WMAdapter perform against rotation attacks?** Please refer to this anonymous [link](https://ibb.co/FjJWm3c) for our evaluation of rotation robustness. It is important to note that WMAdapter is built upon the public checkpoint provided by *Stable Signature*, which was not pretrained with rotation augmentation. Despite this, WMAdapter demonstrates robustness to moderate rotation (up to 15 degrees) and performs comparably to *Stable Signature*. To further enhance robustness under stronger rotation, WMAdapter would need to be built upon a watermark decoder that is pretrained with rotation-augmented data. 
&nbsp; **Q: “The paper does not evaluate against state-of-the-art adaptive removal methods using optimization or generative models”** We would like to clarify that our paper **does evaluate against SOTA open-source** adaptive removal methods, including both optimization-based and generative approaches. Specifically, for **optimization-based methods**, we include evaluations against the adversarial attack proposed in [An’24] under both black-box and white-box settings. For **generative models**, we assess performance against VAE-based methods ([Cheng’20], [Balle’18]) as well as a diffusion-based method ([Zhao’23]). Please refer to Sec. 4.3 and Fig. 5 for detailed results. &nbsp; **Q: Lack of evaluation against "Leveraging Optimization for Adaptive Attacks on Image Watermarks”** As the code for the referenced paper is **not publicly available**, we have made our best effort to reproduce the adversarial noising method described in the paper. Specifically, we implemented their approach using the reported hyperparameters. We found that the suggested $\epsilon$-ball of 2/255 produced negligible attack effects. We increased the $\epsilon$-ball to 8/255, reducing PSNR from 34.8 to 30.3 (a similar drop to other attacks in our Fig. 5), while the bit accuracy dropped moderately from 0.98 to 0.93. This suggests that our method demonstrates resilience to such attacks. We'll include the result in the updated version. &nbsp; **Q: Discuss the design choice and the explicit trade-offs made between visual quality and robustness”** Our goal is to develop a high-quality, artifact-free watermarking technique. To this end, we propose a **non-intrusive watermarking framework** that avoids modifying the pretrained diffusion pipeline. We therefore design the **Contextual Adapter** as a plugin, avoiding direct modification of diffusion models. We also develop **Hybrid Finetuning** to conduct non-intrusive training. 
The relative importance of robustness versus visual quality is indeed a nuanced and context-dependent question. From the perspective of watermarking as a standalone technique, **robustness** is often considered the primary metric. However, when we consider broader deployment scenarios—such as integrating watermarking into GenAI products, which is a key motivation of our work—the priorities shift. In these contexts, **visual quality becomes paramount, as it directly impacts user experience and product adoption**. There is a strong practical demand for artifact-free watermarking that preserves the high visual fidelity expected from GenAI products. Therefore, our design values visual quality without compromising essential robustness, striking a balance suitable for real-world applications. &nbsp; **Q: “How does the approach scale to larger or smaller watermark sizes beyond the 48-bit watermark?”** Our approach is built on a pretrained watermark decoder. To scale to different watermark sizes, one can simply replace the decoder with a pretrained version of the desired size and retrain only the adapter—the rest of the system remains unchanged. Alternatively, users may train a watermark decoder of arbitrary size themselves. &nbsp; **Q: Robustness to significant editing** Thanks for your suggestion. We randomly edit 80% of the image by inpainting, the bit accuracy remains 0.91. --- Rebuttal Comment 1.1: Comment: Thanks for the author's feedback. My concerns are mainly addressed. I tend to be positive!
Summary: This paper introduces WMAdapter, a watermarking plugin for AI-generated images that seamlessly embeds user-specified watermark information during the diffusion generation process. Unlike previous methods that modify diffusion modules to embed watermarks, WMAdapter preserves the integrity of diffusion components, resulting in sharp images with no noticeable artifacts. The contributions of this paper include a contextual adapter for generating adaptive watermark embeddings based on the image content and a hybrid finetuning strategy to suppress artifacts while preserving the integrity of the diffusion process. Experimental results show that WMAdapter provides superior image quality, flexibility, and competitive watermark robustness. Claims And Evidence: The authors claim that WMAdapter is a novel diffusion watermarking solution with an innovative design philosophy, which is appropriate. They also claim that WMAdapter outperforms previous post-hoc and diffusion-native watermarking methods, and the experimental results support this claim. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the diffusion model watermarking. Theoretical Claims: This paper does not provide any theoretical claims. Experimental Designs Or Analyses: The experimental and analyses of this paper is sound and valid. It compared some important model watermarking methods, such as Stable Signature. Supplementary Material: None Relation To Broader Scientific Literature: None Essential References Not Discussed: [1] Yang Z, Zeng K, Chen K, et al. Gaussian shading: Provable performance-lossless image watermarking for diffusion models. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 12162-12171. [2] Xiong C, Qin C, Feng G, et al. Flexible and secure watermarking for latent diffusion model. Proceedings of the 31st ACM International Conference on Multimedia. 2023: 1668-1676. 
Other Strengths And Weaknesses: **Strengths** (1) The paper proposes a watermarking method for image generative models that demonstrates robustness against various types of attacks, outperforming existing approaches. (2) It explores a hybrid training strategy involving the VAE decoder, which effectively mitigates the degradation of image quality caused by watermark embedding. (3) Comprehensive ablation studies are conducted to evaluate the contribution of the adapter and hybrid training strategy, showing (i) improved watermark decoding accuracy and (ii) no compromise in the performance of the generative model. **Weaknesses** (1) It is recommended to include a comparison of watermark effects and potential artifacts under both hybrid training and standalone training modes. (2) The motivation lacks clarity — how do the observations from existing methods logically lead to the need for fine-tuning the VAE decoder and the inclusion of associated modules? Other Comments Or Suggestions: Please refer to the weakness. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q: Essential References Not Discussed: Gaussian Shading and FSW** Thank you for the suggestion. In fact, we have already referenced and discussed both works in the Introduction section. Specifically, they are cited as [Yang et al., 2024] and [Xiong et al., 2023], corresponding to Gaussian Shading and FSW, respectively. &nbsp; **Q: “It is recommended to include a comparison of watermark effects and potential artifacts under both hybrid training and standalone training modes.”** Thank you for the suggestion. We kindly refer the reviewer to Figure 6, which provides a visual comparison of watermark effects and potential artifacts under different training modes. We will include more examples in the revised manuscript to improve clarity. &nbsp; **Q: “The motivation lacks clarity — how do the observations from existing methods logically lead to the need for fine-tuning the VAE decoder and the inclusion of associated modules?”** We would like to further clarify our motivation. Existing methods typically watermark diffusion models by modifying the backbone or VAE decoder parameters, which often results in visible artifacts and degraded visual quality. To address this, we propose a **non-intrusive framework** that preserves all original diffusion parameters. Specifically, we design a **Contextual Adapter** that is attached to the VAE decoder, avoiding direct modification of its internal structure. In addition, we introduce a **Hybrid Finetuning Strategy** to train the Contextual Adapter in a similarly non-intrusive fashion. If there are any remaining questions regarding our motivation, we would be happy to provide further clarification.
Summary: This paper proposes the **WMAdapter**, which generates content-aware watermark embeddings using the contextual adapter and embeds watermarks with a hybrid fine-tuning strategy. Specifically, the contextual adapter comprises a series of fuser modules, each of which is attached before a corresponding VAE decoder block. In addition to training the adapter, **WMAdapter** adopts a hybrid fine-tuning strategy that jointly optimizes the adapter and VAE decoder while using the original VAE decoder for inference. Experimental results show that the **WMAdapter** retains effective watermark-embedding capability while maintaining generation quality. ### Update after rebuttal I've no further questions regarding this paper after rebuttal and keep my original score. Claims And Evidence: The claims are well supported. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense. Below, I list one concern about hybrid fine-tuning. 1. I'm confused about the design of hybrid fine-tuning, and my concern is explained as follows. The better generation performance is achieved at the cost of detection capability. When jointly fine-tuning the adapter $\mathcal{A}$ and the VAE decoder $\mathcal{V}^{\textrm{new}}$, after several updating steps we have the loss $\mathcal{L}(\mathcal{A}(w), \mathcal{V}^{\textrm{new}}(f))$ which should be lower than $\mathcal{L}(\mathcal{A}(w), \mathcal{V}^{\textrm{ori}}(f))$, where $\mathcal{V}^{\textrm{ori}}$ indicates the original VAE decoder. This aligns with the experimental results where Adapter-I compromises parts of the watermarking capability to enhance the image quality. If so, my concern is whether it is possible to select a set of better $\lambda$ to achieve this without hybrid fine-tuning or find better checkpoints during the training process. Theoretical Claims: No theoretical proofs to check. Experimental Designs Or Analyses: The experiments in this paper seem sufficient. 
Supplementary Material: I've reviewed all the supplementary material. Relation To Broader Scientific Literature: WMAdapter can improve traceability within LDMs and reduce the misuse of AIGC contents [1]. [1] Security and Privacy on Generative Data in AIGC: A Survey Essential References Not Discussed: No. Other Strengths And Weaknesses: The overall paper is well-written, and the method design is well supported by the motivation and experiments. Other Comments Or Suggestions: Please see above. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q: Is it possible to select a better $\lambda$ or find better checkpoints to enhance image quality?** Thank you for your insightful question. In practice, selecting a better $\lambda$ or checkpoint to enhance image quality proves very challenging. During our joint training experiments (Adapter-V), we extensively explored different $\lambda$ values and checkpoints. However, none could achieve a better trade-off between image quality and robustness compared to Hybrid Finetuning. Specifically, increasing the weight of the image quality loss significantly deteriorated bit accuracy, sometimes even leading to training collapse. Moreover, merely raising the quality loss weight does not effectively suppress the lens flare artifacts shown in Figure 6; it only slightly improves the quantitative metrics. We believe this issue arises not only from the choice of loss weights or checkpoints but also from modifications to the VAE decoder itself. Joint finetuning significantly differs from the large-scale pretraining originally conducted for the VAE. It adjusts full VAE decoder parameters based on much smaller datasets with distinct distributions, batch sizes, and training objectives (bit acc + quality). Consequently, this easily disrupts the inherent knowledge encoded in the pretrained VAE decoder, leading to reduced image quality or the emergence of artifacts. In other words, adjusting the loss weight or selecting different checkpoints during joint finetuning cannot effectively resolve the substantial deviations from the pretrained parameter landscape. The most reliable solution remains using the original pretrained VAE decoder directly, as employed in our proposed Hybrid Finetuning method. Regarding the loss, lower values of $\mathcal{L}(\mathcal{A}(w), \mathcal{V}^{new})$ achieved during joint fine-tuning do not necessarily indicate genuine improvements, as the VAE decoder $\mathcal{V}^{new}$ tends to overfit on the limited training data. 
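For intuition, the train/inference split behind Hybrid Finetuning can be caricatured with scalars (a hypothetical toy, not the paper's training code; the quadratic surrogate loss, learning rate, and all values are made up): the adapter and a trainable copy of the decoder are updated jointly, but inference always routes through the frozen original decoder.

```python
# Scalar caricature of Hybrid Finetuning (hypothetical; all values made up).
v_ori = 1.0        # frozen pretrained VAE decoder (scalar stand-in)
v_new = v_ori      # trainable shadow copy, initialised from the original
a = 0.0            # adapter parameter, starts as a no-op
target = 0.5       # stand-in for the watermark-decoding objective
lr = 0.1

for _ in range(300):
    pred = a * v_new                 # the training path goes through the copy
    grad = 2.0 * (pred - target)     # gradient of (pred - target)^2 w.r.t. pred
    a, v_new = a - lr * grad * v_new, v_new - lr * grad * a

# Inference path: the trained adapter is plugged into the untouched original.
inference_pred = a * v_ori
```

The point of the toy is the asymmetry: `v_new` is only an optimisation aid, and deploying `v_ori` unchanged is what keeps the pretrained decoder's behaviour, and hence image quality, intact.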
&nbsp; **Q: Relation To Broader Scientific Literature** Thank you for highlighting this point. We will clarify and discuss the relation of our work to the broader scientific literature in the revised manuscript.
Scalable Model Merging with Progressive Layer-wise Distillation
Accept (poster)
Summary: The paper introduces ProDistill, a progressive layer-wise distillation algorithm for merging multiple fine-tuned models into a single high-performing model. It theoretically demonstrates the necessity of task-specific data for effective merging and proposes a layer-by-layer distillation approach that minimizes activation distance between teacher (fine-tuned) and student (merged) models, reducing memory overhead. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: no Relation To Broader Scientific Literature: The key contributions of the paper are closely tied to and build upon broader scientific literature in model merging, knowledge distillation, and multi-task learning Essential References Not Discussed: No Other Strengths And Weaknesses: It introduces a new method called ProDistill, which uses layer-wise distillation to merge models, addressing the performance drop issue in existing methods. The theoretical analysis on the necessity of task-specific data is convincing. It works well with limited data (e.g., 1-shot) and scales to 10+ billion parameters, making it suitable for real-world applications. Experiments are limited to vision and NLP tasks, with no validation in other domains like speech or reinforcement learning. Other Comments Or Suggestions: No Questions For Authors: The experiments focus on vision and NLP tasks. Have you tested ProDistill on other domains, such as speech or reinforcement learning? If not, do you believe the method would generalize well to these areas? Will you release the code and implementation details for ProDistill? If not, what are the reasons? While layer-wise distillation reduces memory overhead, are there any trade-offs in terms of performance or training stability compared to end-to-end distillation? Ethics Expertise Needed: ['Other expertise'] Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for their valuable feedback. The provided suggestions are extremely helpful and constructive, and we will revise the paper accordingly. We address the reviewer's questions as follows. >Q1: The experiments focus on vision and NLP tasks. Have you tested ProDistill on other domains, such as speech or reinforcement learning? If not, do you believe the method would generalize well to these areas? **A1:** Thank you for the great question. Most existing works on model merging primarily evaluate their methods on vision and NLP tasks, with limited exploration in other domains. Actually, we **cannot even find a commonly used model merging benchmark for speech or reinforcement learning in our pretrain-finetune paradigm**. Constructing such a benchmark would require significant effort in dataset selection, model training, and evaluation setup, which is unfortunately beyond the scope of the limited time available for the rebuttal. However, we believe ProDistill can generalize well across different domains. The key reason is that **ProDistill is inherently domain-agnostic**—it does not leverage specific properties of vision or language data for its effectiveness. The only requirement is a layer-wise network structure, which is a common standard in modern deep learning. Besides, we would like to highlight that the LLMs used in our experiments, such as WizardMath-13B, are actually **trained using RLHF-based algorithms**. This partly gives evidence that ProDistill applies to RL-trained models. >Q2: Will you release the code and implementation details for ProDistill? **A2:** Yes, the code and implementation details are already provided in the zip file of the Supplementary Material part on the OpenReview console. >Q3: While layer-wise distillation reduces memory overhead, are there any trade-offs in terms of performance or training stability compared to end-to-end distillation?
**A3:** Thank you for the question. We compare ProDistill with its end-to-end distillation counterpart, **DistillMerge, in Appendix C.2**. In our experiments, we do not observe degradation in performance or training stability when using layer-wise distillation. Finally, we thank the reviewer once again for the effort in providing us with valuable and helpful suggestions. We will continue to provide clarifications if the reviewer has any further questions.
Summary: The paper presents a new method for model merging based on progressive feature alignment. It proposes to learn merging coefficients by progressively aligning the representations of the merged model and the constituent models (the fine-tuned ones) layer by layer. This reduces the computational requirements of the end-to-end implementation. Claims And Evidence: All empirical claims are supported by convincing experiments. See below for the theoretical part. Methods And Evaluation Criteria: Method and evaluation setting are sound. The benchmarks used are standard in the literature. Theoretical Claims: The paper builds its motivation on Theorems 3.1 and 3.2. These are the two worst-case analyses that the authors use in support of the motivation of their method, which instead is data-driven. Honestly, considering the success of data-agnostic methods and their empirical performance, I don’t think that these results are particularly motivating. This contribution seems irrelevant to the paper and the method itself. Experimental Designs Or Analyses: The experimental analysis is well conducted. A missing comparison in Tables 1 and 2 among data-agnostic methods is TIES merging [1], while for the data-driven approaches it is important to also include MaTS [2]. In general, performances are impressive. [1] Yadav, Prateek, Derek Tam, Leshem Choshen, Colin A. Raffel, and Mohit Bansal. "Ties-merging: Resolving interference when merging models." NeurIPS 2023. [2] Derek Tam, Mohit Bansal, Colin Raffel. "Merging by Matching Models in Task Parameter Subspaces." TMLR 2024. Supplementary Material: Yes, I checked for further ablations and experiments. In particular, I looked for the performance gap between the proposed ProDistill and the naive DistillMerge. I would have expected that DistillMerge would be an upper bound on the performance of ProDistill. Could you elaborate more on why it is not so? See Fig. 5. Also, in the right figure (RoBERTa exp) the caption is DirectDistill.
If this is not a typo, the caption should be updated. Could you please give more details on DirectDistill? Am I understanding correctly that this baseline is just distillation where you fine-tune the model parameters of the merged model, instead of the lambda coefficients? Could you elaborate on these ablations? Honestly, I think that these ablations and the results are much more insightful than the theoretical motivation. I suggest expanding them and finding space in the main paper. Relation To Broader Scientific Literature: I think the paper connects to the right literature on model merging, both data-driven and data-agnostic, building on top of insights from other papers that show that representation alignment is crucial for model merging. Essential References Not Discussed: see above. Other Strengths And Weaknesses: The main strength is that this is an actually scalable method. The main weakness is that the theoretical motivation for this method seems irrelevant, considering the empirical performance of data-agnostic model merging methods, but I want to hear a better explanation from the authors. Other Comments Or Suggestions: see above Questions For Authors: Could you give more details on the "dual inputs"? I don't think I understood whether this is something particular to your method or whether it just identifies the inputs of the merged and the fine-tuned models. The way it is introduced is confusing, as it seems a technical contribution, but to me this is just an implementation detail of your method. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to express our gratitude for the reviewer's helpful and positive comments. The suggestions provided have been instrumental in refining our work, and we will incorporate the necessary revisions accordingly. Below, we address each of the reviewer’s questions in detail. >Q1: Honestly, considering the success of data-agnostic methods and their empirical performance, I don’t think that these results are particularly motivating. ... Honestly, I think that these ablations and the results are much more insightful than the theoretical motivation. I suggest expanding them and finding space in the main paper. **A1:** Thank you for your thoughtful suggestion. We acknowledge that our theoretical results focus on worst-case analysis and may not fully capture real-world practice. However, we find these results delicate and interesting, as they highlight aspects that are not widely recognized in the community, which is why we included them in the main text. We agree the ablations in the appendix convey much information that we would like to share with the readers. We will restructure the paper to incorporate these results into the main text in future revisions. >Q2: Missing comparisons in table 1,2 for data-agnostic methods is TIES merging [1], while for the data-driven approaches, it is important to include also MaTS [2]. **A2:** Thanks for the suggestions. We **add additional experiments to compare with TIES and MaTS**. For TIES, we use the same hyperparameter grid as in our LLM experiments. For MaTS, we use the RegMean Objective and RegMean Initialization, and choose the same hyperparameter grid as that of RegMean in our paper. The results are given below. Our method ProDistill outperforms the new baselines. An interesting finding is that TIES merging performs poorly on NLP tasks, which coincides with previous findings in [1].
| | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD | Avg |
|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| TIES Merging | 65.05 | 64.43 | 74.87 | 77.33 | 81.28 | 69.37 | 96.53 | 54.52 | 72.92 |
| MaTS | 67.65 | 67.23 | 81.76 | 93.30 | 86.76 | 80.08 | 97.39 | 60.21 | 79.30 |
| ProDistill(Ours) | 68.90 | 71.21 | 89.89 | 99.37 | 96.13 | 95.29 | 99.46 | 68.03 | 86.04 |

| | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE | Avg |
|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| TIES Merging | 0.2050 | 0.8853 | 0.5427 | -0.2247 | 0.8282 | 0.7786 | 0.8142 | 0.6462 | 0.5594 |
| MaTS | 0.3022 | 0.9243 | 0.8205 | 0.5187 | 0.8178 | 0.7095 | 0.8506 | 0.6498 | 0.6992 |
| ProDistill(Ours) | 0.4442 | 0.9312 | 0.8464 | 0.6942 | 0.8134 | 0.7857 | 0.8900 | 0.7076 | 0.7641 |

>Q3: I would have expected that DistillMerge would be an upper bound on the performance of ProDistill. Could you elaborate more on why it is not so? See Fig. 5. **A3:** Thanks for pointing this out. We also find this result very intriguing. We hypothesize that ProDistill decomposes the overall training objective of DistillMerge into more fine-grained layer-wise objectives. This decomposition makes it easier to minimize the objectives sequentially rather than optimizing the overall objective in one step. >Q4: Also in the right figure (Roberta exp) caption is DirectDistill. If this is not a typo, the caption should be updated. **A4:** Thanks for pointing this out. We will correct the typo in the revision. >Q5: Could you please give more details on DirectDistill? Am I understanding correctly that this baseline is just distillation where you fine-tune the model parameters of the merged model, instead of the lambda coefficients? **A5:** Yes, exactly! The DirectDistill baseline uses the $\ell_2$ distillation loss to fine-tune the original model parameters, instead of the lambda coefficients.
>Q6: Could you give more details on the "dual inputs", I don't think I understood if this is something that is particular to your method or it just identifies the input of the merged and the fine-tuned models. The way it is introduced is confusing as it seems a technical contribution, but to me this is just an implementation detail of your method. **A6:** Thanks for the question. The proposed "dual inputs" is an important implementation detail in our method, which has a big impact on the final accuracy (see Appendix C.4). Additionally, this design choice is often overlooked in activation-matching algorithms, and we highlight it to ensure awareness within the community. Once again, we appreciate the reviewer's feedback and hope that our responses clarify the questions. We remain committed to improving the quality of our paper and welcome any further feedback. **References** [1] He Y, Hu Y, Lin Y, et al. Localize-and-stitch: Efficient model merging via sparse task arithmetic.
Summary: Model merging is an emerging paradigm that combines multiple models into **a single, versatile model, eliminating the need for extensive retraining and substantial weight storage**. However, it is commonly observed that the performance of merged models degrades as the number of models increases. To mitigate this, existing research has proposed using **a small amount of validation data** from downstream tasks to improve performance. This paper starts with a theoretical analysis of data-free merging algorithms, showing that their performance is arbitrary and often suboptimal. And this analysis highlights that a small amount of few-shot data is **essential** to achieve meaningful, non-trivial performance. Inspired by these insights, the authors reframe model merging as **a distillation process**. They propose aligning the activations of each layer individually to boost the merged model's performance. This approach offers multiple efficiency benefits, including **reduced data, computation, and memory requirements**. Empirically, the authors validate their framework across three distinct experimental setups, successfully demonstrating its effectiveness from various perspectives. Claims And Evidence: 1. The claims in this paper are clear and well-written, and the experimental results appear **convincing**. The proposed method offers **numerous efficiency benefits**—such as reduced data, computation, and memory usage—further demonstrating its practical applicability. 2. While the merging framework presented in this paper is promising, it consistently assumes that the models being merged share the same base architecture. What happens if the fine-tuned models were trained **under different configurations**, such as varying architectures, hyperparameters (e.g., batch size, learning rate), or training durations (e.g., total epochs)? **Misalignment prior to merging** could negatively affect performance, and this issue cannot be overlooked. 
Can the authors justify how your method handles alignment before merging models in such scenarios? 3. Furthermore, how does the proposed ProDistill method compare to **multi-task learning (MTL)** when both are trained on a validation dataset? What is the performance gap between these approaches? Methods And Evaluation Criteria: The benchmark datasets and experimental setup used in the paper are commonly used in the model merging field. The evaluation makes sense to me and the analysis of efficiency from multiple perspectives looks solid to me. Theoretical Claims: I have checked their proofs in Appendix A. Both of them look correct to me. Experimental Designs Or Analyses: Yes. Their experimental design is valid to demonstrate their method's effectiveness. Supplementary Material: Yes. I have walked through their appendix. I appreciate the illustrations in Section C.3, which effectively clarify the relationships between algorithms for readers. Relation To Broader Scientific Literature: The primary contribution of this work compared to other model merging techniques lies in **reframing the merging process as a distillation task**, leveraging a few-shot validation dataset, and progressively optimizing coefficients across model layers **in an efficient manner**. Essential References Not Discussed: There is no major reference to be discussed. However, it is worth noting that existing literature, such as the work on model alignment prior to merging (e.g., https://arxiv.org/abs/2209.04836), has explored layer-wise alignment techniques. Other Strengths And Weaknesses: This paper is easy to read, and their experimental results and framework efficiency look very promising. Other Comments Or Suggestions: In Equation 2, the method minimizes the feature distance between layer embeddings, which assumes that the embedding sizes of the models being merged are **identical**.
How can this approach be extended to accommodate models with different embedding sizes, and what strategies could mitigate this limitation? Questions For Authors: How do the coefficients for each layer and downstream task evolve across training steps? Are there insights from the results that could enable us to predict these coefficients **without performing the full training process**? Additionally, how do these coefficients relate to the correlation or similarity between downstream tasks? Moreover, is it necessary to compute coefficients for all the layers, or can we reuse several close layers' coefficients without computing them? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s valuable feedback. **We follow the reviewer's advice and conduct additional experiments, with the results provided at https://anonymous.4open.science/r/Experiments-for-Reviewer-CuV6-8476.** We address the reviewer's specific questions as follows. >Q1: What happens if the fine-tuned models were trained under different configurations, such as varying architectures? ... How can this approach be extended to accommodate models with different embedding sizes? **A1:** Thank you for this insightful question. Our method can accommodate training differences such as varying hyperparameters; however, it is not directly applicable to merging models with different architectures, as activation matching strategies may fail. The broader challenge of merging models with differing architectures, which we highlight in the Related Works section, is a valuable research direction but is beyond the scope of this paper. Notably, most existing methods based on task vectors struggle with such settings, as weight averaging fails when architectures differ. >Q2: Can the authors justify how your method handles alignment before merging models in such scenarios? **A2:** Our method actually aligns the representation of each layer, which effectively addresses misalignment in the considered fine-tuning setup. In cases of severe misalignment, e.g., merging independently trained models, our method can be applied **in parallel to existing alignment algorithms**. Specifically, one can first apply an off-the-shelf alignment algorithm like Git-rebasin [1], and then apply our ProDistill to complete the merging process. >Q3: How does the proposed ProDistill method compare to multi-task learning (MTL) when both are trained on a validation dataset? What is the performance gap between these approaches?
**A3:** We have already provided such an ablation study in **Appendix C.3**, where we study an algorithm termed **DirectTrain**, which conducts supervised multi-task training on the few-shot validation datasets. Our method significantly outperforms the DirectTrain algorithm, as indicated in Figure 6. >Q4: It is worth noting that existing literature, such as the work on model alignment prior to merging, has explored layer-wise alignment techniques. **A4:** Good point! The Git-rebasin paper [1] also explores a layer-wise activation matching strategy. The major difference is that (1) they use activation matching to learn a **permutation matrix** that aligns the neurons, whereas (2) we use activation matching to learn the **merging coefficients**, which directly determine the final merged model. We have cited [1] and will make further discussions in the revision. >Q5: How do the coefficients for each layer and downstream task evolve across training steps? **A5:** Thank you for this insightful question. We conduct additional experiments to track the change of merging coefficients. The results are provided in **Figures 1-3 in the link.** We make several observations:

1. The mean of merging coefficients remains stable around its initialization of 0.3.
2. The standard deviation of merging coefficients keeps increasing during training.
3. The cosine similarities between coefficients keep decreasing during training.

These results indicate that ProDistill captures the **fine-grained (Obs 2) and task-specific (Obs 3) variation within each module**, rather than merely adjusting the overall scaling (Obs 1). Therefore, unlike previous approaches that use scalar coefficients, our element-wise coefficients carry richer information and are unlikely to be easily predictable from the data or model alone. >Q6: How do these coefficients relate to the correlation or similarity between downstream tasks?
**A6:** We provide additional experiments measuring the cosine similarity between merging coefficients across different tasks. The results, provided in **Figure 4 in the link**, reveal that the coefficients of different datasets have a uniformly low cosine similarity of about 0.15. However, some correlation exists; for example, the coefficients for MNIST and SVHN (both digit classification tasks) have a slightly higher cosine similarity of 0.2. >Q7: Is it necessary to compute coefficients for all the layers, or can we reuse several close layers' coefficients without computing them? **A7:** Thanks for the suggestions. We follow the advice and use vanilla task arithmetic for layers where task vectors have the smallest $\ell_2$ norm. The results are given in **Table 1 in the link**. It shows that skipping layers degrades performance. However, even when skipping 6 out of the total 12 layers, the average accuracy is still above 80%. Once again, we sincerely thank the reviewer for their constructive feedback, and we are eager to engage in further discussions to clarify any concerns. **References** [1] Ainsworth S K, Hayase J, Srinivasa S. Git re-basin: Merging models modulo permutation symmetries. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have carefully reviewed your explanations. 1. Regarding Table 1 in your attachment, I find it particularly interesting to see how the trade-offs play out when combined with Figure 1. The approach of reusing coefficients for merging to reduce computation time is compelling. Identifying layers with similar coefficients and reusing them could be an effective optimization strategy. 2. It’s also intriguing to explore how this work handles different bases for model merging and the combination of models in various structures. I look forward to seeing further developments in your future work. Thank you for your time and response.
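The layer-selection heuristic used in A7 above (falling back to vanilla task arithmetic for the layers whose task vectors have the smallest $\ell_2$ norm) can be sketched as follows. The task vectors here are random stand-ins with norms that grow with depth, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, k = 12, 6                       # skip 6 of 12 layers, as in A7
# stand-ins for per-layer task vectors (fine-tuned minus pre-trained weights)
task_vectors = [rng.normal(scale=0.01 * (i + 1) ** 2, size=(16, 16))
                for i in range(n_layers)]

norms = np.array([np.linalg.norm(t) for t in task_vectors])
skip = set(np.argsort(norms)[:k].tolist())   # layers with the smallest l2 norm

# skipped layers reuse vanilla task arithmetic; the remaining layers get
# learned merging coefficients via ProDistill
plan = ["task-arithmetic" if i in skip else "prodistill" for i in range(n_layers)]
print(plan)
```

With the synthetic norms above, the six shallowest layers are the ones skipped; on real checkpoints the selected layers would depend on where fine-tuning changed the weights least.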
Summary: The paper introduces ProDistill, a model merging algorithm leveraging progressive layer-wise distillation. A key contribution is the use of merging coefficients that are the same size as the model weights, enabling a fine-grained control of the merging process through element-wise operations. ProDistill efficiently computes these coefficients by separately optimizing each layer, reducing memory requirements compared to traditional end-to-end methods. The authors show that this progressive layer-wise approach significantly improves merging performance, achieving state-of-the-art results. ## update after rebuttal While the authors clearly addressed my question, I still find that the method lacks a compelling justification—especially given that it requires more computation and data, yet does not consistently outperform Task Arithmetic (while I acknowledge that other baselines also show similar limitations). Even after considering the rebuttal, this key concern remains unresolved. Therefore, I do not think the paper is strong enough for an accept at the level of a 4, and I will maintain my original score. (I’d like to note that I am not an expert in this domain, so I defer to the AC and other reviewers with more experience in this area when making the final decision.) Claims And Evidence: While the results of ProDistill are consistently strong on vision and small-scale NLP tasks, the performance gains are less clear and more nuanced in large language model (LLM) tasks, which may indeed be the most significant scenario. For instance, in Tables 3, 9, and 10, the simplest baseline—Task Arithmetic, which does not require a few-shot validation dataset or additional training—achieves better results than ProDistill on certain tasks. Why does ProDistill not show a clear performance advantage over Task Arithmetic in these cases? Is it due to insufficient few-shot data (≤64 samples)? 
If so, how many validation samples would be necessary to clearly outperform this baseline? Additionally, regarding Figure 4 (left), which illustrates a clear positive relationship between validation set size and performance in vision tasks, can we expect similar trends to hold consistently for LLM tasks? Specifically, how do we explain or interpret the anomaly in Table 9, where 32-shot performance is higher than 64-shot performance? Methods And Evaluation Criteria: The proposed methods and evaluation criteria generally make sense and are well-justified for the problem at hand, except for the evaluation of Large Language Model (LLM) tasks, as previously discussed in the "Claims and Evidence" section. Theoretical Claims: There are no notable issues or concerns to highlight. Experimental Designs Or Analyses: The experiments utilize well-defined and widely recognized benchmark tasks and datasets. Supplementary Material: While the authors have provided their implementation code, I have not separately executed or verified the code myself. Relation To Broader Scientific Literature: This work can directly inform future research on model merging, particularly in developing scalable methods under conditions with moderate unlabeled data availability. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: It would be helpful if the authors explained why a different set of baselines was chosen specifically for the LLM tasks compared to the other experiments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's comments and valuable suggestions. **We conduct additional experiments to clarify the reviewer's question, with the results given in https://anonymous.4open.science/r/Experiments-for-Reviewer-ZoNc-9701.** We address the reviewer's questions in more detail as follows: >Q1: While the results of ProDistill are consistently strong on vision and small-scale NLP tasks, the performance gains are less clear and more nuanced in large language model (LLM) tasks...Task Arithmetic, which does not require a few-shot validation dataset or additional training—achieves better results than ProDistill on certain tasks. Why does ProDistill not show a clear performance advantage over Task Arithmetic in these cases? **A1:** This is a very good question. Task Arithmetic indeed performs well in our LLM experiments, which we attribute to the following reasons:

1. Task Arithmetic is more effective for **larger models**. In our vision experiments, its gap to fine-tuned models is over 20% for small ViT-B-32 models but narrows to about 10% for larger ViT-L-14 models. This trend aligns with findings from [1], which suggest that larger models exhibit stronger kernel behavior during fine-tuning and are closer to linear models (see Section D.1 in [1]).
2. Task Arithmetic works better for merging **a small number of models**. As shown in [2], increasing the number of models leads to greater task interference, reducing the effectiveness of methods that do not explicitly address interference. Our LLM experiments involve merging only two models, whereas our vision and NLP experiments merge up to eight models.
3. Task Arithmetic can have a **less balanced accuracy across tasks**, which gives a false sense of high performance.
For example, when merging WizardMath-13B and Llama-2-13B-Code-Alpaca models (results given in Table 3 in our manuscript), task arithmetic shows slightly better performance than ProDistill on math-related benchmarks (0.6467 vs 0.6279 on GSM8k), but significantly worse performance on code-related benchmarks (0.0840 vs 0.2239 on MBPP). In other words, although Task Arithmetic may perform better on some tasks, **ProDistill provides more consistent improvements**, as reflected in the normalized average metric. >Q2: Additionally, regarding Figure 4 (left), which illustrates a clear positive relationship between validation set size and performance in vision tasks, can we expect similar trends to hold consistently for LLM tasks? Specifically, how do we explain or interpret the anomaly in Table 9, where 32-shot performance is higher than 64-shot performance? **A2:** Thank you for this question. We conducted additional experiments and found that the inconsistency arises from **randomness in the sampling of validation data**. To confirm this and further investigate data scaling, we repeated the experiments from Table 9 using three different random seeds, and **extended the validation shots to 128**. The averaged performance of ProDistill is given in the table below. The updated results demonstrate a clear positive correlation between validation shot count and merging performance.

| | AlpacaEval 2.0 | gsm8k | hendrycks math | human eval | mbpp | Avg | Normalized Avg |
|-|-|-|-|-|-|-|-|
| ProDistill 16 shots | 0.1124 | 0.6370 | 0.1302 | 0.2317 | 0.3120 | 0.2847 | 0.8798 |
| ProDistill 32 shots | 0.1134 | 0.6387 | 0.1366 | 0.2317 | 0.3080 | 0.2857 | 0.8885 |
| ProDistill 64 shots | 0.1136 | 0.6414 | 0.1338 | 0.2500 | 0.3030 | 0.2884 | 0.8929 |
| ProDistill 128 shots | 0.1058 | 0.6624 | 0.1264 | 0.2866 | 0.3100 | 0.2982 | 0.9002 |

To mitigate the effect of randomness, we **rerun all the LLM experiments for ProDistill** using three random seeds.
The updated results can be found at https://anonymous.4open.science/r/Experiments-for-Reviewer-ZoNc-9701. >Q3: It would be helpful if the authors explained why a different set of baselines was chosen specifically for the LLM tasks compared to the other experiments. **A3:** We change the baselines for LLM tasks due to **scalability constraints**. Many of the baselines in the vision and NLP tasks are computation- or storage-heavy, making them unsuitable for merging LLMs. For example, Adamerging requires storing each of the fine-tuned checkpoints in memory, and Localize-and-Stitch requires solving an end-to-end optimization problem. Our method ProDistill is designed to be scalable and efficient, making it well-suited for merging LLMs without such constraints. We once again thank the reviewer for their valuable feedback and insightful questions. Please let us know if any further clarifications are needed. **Reference** [1] Ortiz-Jimenez G, Favero A, Frossard P. Task arithmetic in the tangent space: Improved editing of pre-trained models. [2] Yadav P, Tam D, Choshen L, et al. Ties-merging: Resolving interference when merging models. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. I will keep the positive score.
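A note on the "Normalized Avg" column in the rebuttal tables above: its exact definition is not given in this excerpt. A common convention in the model-merging literature, assumed in this sketch, normalizes each task score by the score of the corresponding fine-tuned expert before averaging, so that a value of 1.0 means the merged model matches the experts everywhere. All numbers below are made up for illustration:

```python
# hypothetical per-task scores of a merged model and of the individual
# fine-tuned experts (illustrative values only)
merged = {"gsm8k": 0.63, "mbpp": 0.22}
experts = {"gsm8k": 0.66, "mbpp": 0.30}

# plain average vs. expert-normalized average
avg = sum(merged.values()) / len(merged)
norm_avg = sum(merged[t] / experts[t] for t in merged) / len(merged)
print(round(avg, 3), round(norm_avg, 3))
```

Under this assumed definition, the normalized average rewards balanced performance: a method that trades a small loss on one task for a large loss on another (as discussed for Task Arithmetic in A1) scores lower even if its plain average looks similar.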
Audio Flamingo 2: An Audio-Language Model with Long-Audio Understanding and Expert Reasoning Abilities
Accept (poster)
Summary: This paper introduces Audio Flamingo 2 (AF2), an advanced Audio-Language Model (ALM) designed for long-audio understanding and expert-level reasoning. AF2 leverages a custom CLAP model, synthetic Audio Question Answering (AQA) data, and a multi-stage curriculum learning strategy to achieve state-of-the-art performance in audio understanding and reasoning tasks. The model outperforms existing large-scale models despite having only a 3B parameter language model. Key contributions include the proposal of a new dataset, LongAudio, for long-audio understanding, and a benchmark, LongAudioBench, for evaluating ALMs on long audio segments. The paper also details innovations in data generation, architecture design, representation learning, and training strategies. AF2 demonstrates superior performance across 20+ benchmarks, including foundational audio understanding and reasoning tasks. Claims And Evidence: The claims made in the paper are well-supported by clear and convincing evidence. The authors demonstrate that AF2 outperforms existing state-of-the-art models, such as GAMA, Qwen-Audio, and Audio Flamingo, across multiple benchmarks. The performance improvements are attributed to the use of high-quality training data (AudioSkills), an improved audio encoder (AF-CLAP), and a novel three-stage curriculum training strategy. The paper provides extensive ablation studies to validate the impact of each design choice, including the use of different audio encoders, training data compositions, and training schedules. The results show that the proposed methods significantly enhance the model's reasoning capabilities and long-audio understanding. Methods And Evaluation Criteria: The methods and evaluation criteria proposed in the paper are well-suited for the problem of long-audio understanding and reasoning. The authors introduce LongAudio, a large-scale dataset with over 260k AQA instances, specifically designed for training and evaluating ALMs on long audio segments. 
The benchmark LongAudioBench further validates the model's performance on long-audio tasks. The evaluation criteria include both foundational audio understanding tasks (e.g., classification, captioning) and reasoning tasks (e.g., AQA, entailment), providing a comprehensive assessment of the model's capabilities. Popular benchmarks are also utilized. Theoretical Claims: This is not a theoretical paper. Experimental Designs Or Analyses: The experimental designs and analyses presented in the paper are sound and valid. The authors conduct extensive experiments to compare AF2 with existing state-of-the-art models across various benchmarks. The use of a three-stage curriculum training strategy is validated through ablation studies, demonstrating the importance of gradual context-length extension and data quality. The paper also explores the impact of different audio encoders and training data compositions, providing insights into the factors contributing to the model's performance. The results are consistent and highlight the effectiveness of the proposed methods. Supplementary Material: Prompt Engineering and Data Details. Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader scientific literature on audio-language models and multi-modal learning. The work builds upon previous efforts in contrastive learning for audio-language alignment (e.g., CLAP, Wav2CLIP) and extends them by introducing a custom audio encoder (AF-CLAP) and a large-scale dataset for long-audio understanding (LongAudio). The focus on expert-level reasoning aligns with recent trends in developing models capable of complex reasoning tasks (e.g., GAMA, Qwen2-Audio, SALMONN). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: Originality: The paper introduces several novel contributions, including the LongAudio dataset, the AF-CLAP audio encoder, and the multi-stage curriculum training strategy. 
These innovations collectively enhance the model's capabilities and set it apart from existing ALMs. Significance: The focus on long-audio understanding and expert reasoning addresses a gap in the current literature. The proposed methods and datasets are likely to drive further progress in this area. Clarity: The paper is well-written and provides a clear description of the methods, experiments, and results. The ablation studies and detailed analyses help readers understand the impact of each design choice. The paper contains sufficient detail. Weakness: See Questions. Other Comments Or Suggestions: See Questions. Questions For Authors: 1. Will the dataset be released? Reproducing the dataset seems to be a huge project for followers of this paper. 2. Why do the authors filter out segments with audio-visual similarity above a threshold? Why is high audio-visual relevance not a good signal of data quality? How do you filter out videos whose images are not relevant to the audio (e.g., a video with background music added)? 3. The audio feature seems to have a length of 64*2048; how is it used to calculate similarity with the T5 feature? Mean-pooling or a linear layer? 4. Currently AF2 seems to be an expert model on audio understanding. Can it be extended to 1) multiturn, 2) a speech scenario (i.e., accept a spoken question and then answer)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thorough review and constructive feedback. We have tried to address each of your concerns point by point. > Will the dataset be released? It seems to be a huge project if the followers of this paper want to reproduce the dataset. **Ans.**: Yes, absolutely! As stated on Page 1 of our paper, we will open-source all code, data, and model checkpoints. We have all approvals and plan to release the dataset publicly at the time of ICML notifications (if accepted). For the purpose of this rebuttal, we are providing access to a part of the QA data at this [anonymous link](https://github.com/anon-sub-openreview/af2). We kindly ask that you do not share this dataset for now. > Why the authors filter out segments with audio-visual similarity above a threshold? Why a high relevance on audio and vision is not a good signal indicating a good data? How to filter out the video whose image are not quite relevant to the audio (e.g., a video with background music added)? **Ans.**: We would like to clarify a potential misunderstanding. As stated in Lines 158-160 of the paper, audio-visual similarity is **not** used to discard low-quality segments—it is used to discard **redundant** or highly similar consecutive segments. Our goal is to promote diversity in the training data. Here's how the method works: 1. We segment each video (and its corresponding audio) into 10-second clips. 2. For each 10-second segment, we compute a **visual embedding** as follows: - We extract the middle frame from every 1-second interval (10 frames total). - Each frame is passed through CLIP to get an embedding, resulting in $\mathbb{R}^{10 \times 768}$. - These are mean-pooled to obtain a single visual embedding of shape $\mathbb{R}^{1 \times 768}$. 3. We compute the **audio embedding** of the 10-second audio using LAION-CLAP, also resulting in $\mathbb{R}^{1 \times 768}$. 4. 
We then average the audio and visual embeddings to obtain a single **audio-visual embedding** per segment: $\mathbb{R}^{1 \times 768}$. 5. Finally, we compute the cosine similarity between consecutive audio-visual embeddings and discard those with high similarity. This ensures diversity in both content and modality for CLAP training. To address the last part of your question: if a video has low audio-visual correspondence (e.g., background music over unrelated visuals), the embeddings tend to be **highly similar** across segments. As a result, such segments are **filtered out**, since they lack useful diversity. We will rewrite this line to make it clearer in the final version of our paper. > The audio feature seems to have a length of 64*2048, how to use it to calculate similarity with T5 feature? Mean-pooling or a linear? **Ans.**: Thank you for the question. We would like to clarify a potential misunderstanding. We follow a similar setup to LAION-CLAP for training our model, using **HTSAT-Large** as the audio encoder. The original HTSAT-Large model produces a feature embedding of shape $64 \times 2048$ for any 10-second audio clip. For CLAP training, these embeddings are **mean-pooled** and passed through a **linear projection layer**. Specifically: - Audio representations of shape $n \times 2048$ (after pooling) are obtained for a batch of size $n$. - These are then compared to text features from CLAP, also of shape $n \times 2048$, to compute similarity. This linear layer is trained jointly with the rest of the CLAP model. To integrate our trained CLAP with the LLM and build Audio Flamingo 2, we **remove the final linear layer** and instead use the raw $64 \times 2048$ embeddings from the last layer of HTSAT as input to the LLM. This provides the model with richer temporal representations for downstream multimodal understanding. > Currently the AF2 seems to be an expert model on audio understanding. 
Can it be extended to 1) multiturn, 2) speech scenario (i.e., accept a spoken question and then answer)? **Ans.** Thank you for the question. Yes, absolutely! 1) **Multi-turn Extension:** Just like the original Audio Flamingo, extending to multi-turn interaction is straightforward. We simply include the dialogue history in the text context and condition on prior audio inputs. These are separated using the special `<sep>` token, which is already a learned component in AF2. 2) **Speech Scenario Extension:** To support speech input, we add speech data to the training corpus. Empirically, we found that: - Incorporating **ASR data** during Stage 1, and - Including **Speech QA data** during Stage 2 significantly improves AF2’s speech understanding capabilities. Furthermore, we observed additional benefits when replacing CLAP with a **custom Whisper-based encoder** capable of handling both speech and music. This modification specifically enhances performance on ASR and speech-to-text translation tasks. Extending AF2 to better handle speech is an active area of our ongoing and future work.
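The segment-filtering procedure described in this rebuttal (mean-pooled CLIP frame embeddings averaged with a CLAP audio embedding, then cosine similarity between consecutive segments) can be sketched as follows. This is a minimal illustration assuming precomputed embeddings, not the authors' released code; the 0.9 threshold and all function names are illustrative assumptions:

```python
import numpy as np

def audio_visual_embedding(frame_embs: np.ndarray, audio_emb: np.ndarray) -> np.ndarray:
    """frame_embs: (10, 768) CLIP embeddings of the middle frame of each second;
    audio_emb: (768,) LAION-CLAP embedding of the same 10-second clip."""
    visual = frame_embs.mean(axis=0)   # mean-pool frames -> (768,)
    return (visual + audio_emb) / 2.0  # average the two modalities

def filter_redundant(segment_embs, threshold=0.9):
    """Drop a segment when its cosine similarity to the previously
    kept segment exceeds `threshold`; return the kept indices."""
    kept, prev = [], None
    for i, emb in enumerate(segment_embs):
        if prev is not None:
            cos = np.dot(prev, emb) / (np.linalg.norm(prev) * np.linalg.norm(emb))
            if cos >= threshold:
                continue  # too similar to the last kept segment: discard
        kept.append(i)
        prev = emb
    return kept
```

For integration with the LLM, the rebuttal notes that the final linear projection is removed and the raw 64x2048 HTSAT features are used directly, so the pooling above applies only to the CLAP-training and filtering stages.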
Summary: This paper proposes Audio Flamingo 2, an audio-language model with advanced audio understanding and reasoning abilities, demonstrated by state-of-the-art performance on several benchmarks. The authors develop a custom CLAP model, a dataset called LongAudio to extend ALMs to 30-second to 5-minute audios, and another called AudioSkills with a focus on reasoning, and demonstrate the usefulness of curriculum learning. Also introduced are AF-CLAP, a novel method for contrastive pretraining, and a 3-stage curriculum learning scheme. The paper shows the use of LLMs to label synthetic data and has a demonstration of high-quality reasoning in the audio domain, all at a smaller size than competitor ALMs. Claims And Evidence: 1) The authors claim that improved data quality can trump scaling of compute/model size in terms of performance. They show this by developing two high-quality datasets (LongAudio and AudioSkills) and comparing scaling with and without the high-quality datasets, which shows considerable performance differences; the claim is therefore well justified. 2) The authors claim that their AF-CLAP method is superior to other contrastive language-audio pre-training methods. They justify this claim by training other types of CLAP methods on the same data as their proposed method, and show that the resulting performance does not measure up to AF-CLAP. 3) The authors claim that curriculum learning is helpful and justify this by running detailed comparisons against different training schedules and showing results on a variety of benchmarks. Methods And Evaluation Criteria: The methods are well justified, and the evaluation criteria are correct and comprehensive. Theoretical Claims: The paper is primarily empirical; there are no real proofs to check. Experimental Designs Or Analyses: The experiments are set up thoughtfully. Various training strategies are considered, different architectural decisions are ablated, and dataset design and benchmark creation are well done. 
Experimental design is excellent overall. Supplementary Material: I did a high-level overview of all the supplementary material. Relation To Broader Scientific Literature: The paper builds upon Audio Flamingo, its predecessor model, and is directly inspired by contrastive language-audio pretraining. It uses several methods from prior work, including HTSAT for generating audio embeddings and T5 for text embeddings. It relies on benchmarks that are well-suited to the tasks of reasoning and audio understanding that it is attempting to do. The authors also extend the ongoing discussion in the community around data quality via detailed data collection and ablations that specifically demonstrate its usefulness. Broadly, the paper engages deeply with the existing audio understanding literature. Essential References Not Discussed: I did not find major references missing. Other Strengths And Weaknesses: Strengths: 1. The breadth and comprehensiveness of the experimental work done here is impressive and is a clear indication of the strength of the results. 2. The scaling and data quality experiments are compelling, and the new datasets are a welcome addition to the community. Weaknesses: 1. The section explaining the training objective could be improved by giving slightly more intuition as to how the loss directly contributes to the goals of improving linguistic invariance and compositional reasoning. 2. The novelty of the paper is in the connection of various components and the invention of new datasets, not necessarily in new insights into the modeling method apart from the loss function. This is, however, a minor weakness and somewhat universal in large-scale pretraining-focused work. ## update after rebuttal I am satisfied with the authors' responses to my questions and will keep my score. Other Comments Or Suggestions: None Questions For Authors: I do not have any questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the encouraging review. We are happy you liked our paper. We'd just like to clarify that, in addition to the dataset contributions, our paper also presents several modeling insights that we believe are novel and impactful: - **Dynamic batching for efficient training:** As described in Appendix H.2, we introduce a dynamic batching strategy based on audio length. This significantly reduces padding, improves training efficiency, and yields faster convergence with better model performance. - **Efficiency in long audio modeling:** Section 3.2 explains how our AF2 module acts as an effective alternative to prefix-based architectures for long audio inputs. By leveraging cross-attention instead of increasing context length, we avoid the quadratic time complexity associated with standard attention mechanisms. - **Superiority of cross-attention over prefix tuning:** Section 6.5 demonstrates that cross-attention outperforms prefix tuning on the same dataset, indicating its effectiveness for audio-text alignment and long-context understanding. - **Curriculum learning strategy:** In Section 6.7, we compare 10 training schedules and show that our proposed curriculum learning strategy consistently achieves the best performance. We further highlight a counterintuitive insight: fine-tuning the language model weights degrades performance, and curriculum design is crucial for reasoning over long-form audio. - **High-quality data over large models:** Finally, Section 6.6 shows that training smaller LLMs on high-quality audio-text data results in performance that matches or exceeds that of larger LLMs trained on lower-quality data—underscoring the importance of data quality over model size for audio reasoning.
Summary: This paper introduces Audio Flamingo 2 (AF2), a small yet powerful Audio-Language Model (ALM) with advanced audio understanding and reasoning capabilities. AF2 leverages a custom CLAP model, synthetic AQA data, and a multi-stage curriculum learning strategy to achieve state-of-the-art performance across 20+ benchmarks. It extends audio understanding to long audio segments (30 secs - 5 mins) and introduces LongAudio, a novel dataset for long audio captioning and QA tasks. Claims And Evidence: The claims in the submission are generally supported by clear and convincing evidence. The paper presents Audio Flamingo 2 (AF2) and provides extensive experimental results across 20+ benchmarks, showing state-of-the-art performance. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem of audio understanding and reasoning. The introduction of LongAudio and LongAudioBench provides relevant datasets for evaluating long audio understanding, making the evaluation criteria comprehensive and appropriate. Theoretical Claims: The paper primarily focuses on empirical results and architectural innovations rather than presenting formal theoretical proofs. It describes the model's design, training strategies, and performance on various benchmarks without delving into theoretical derivations or proofs. Experimental Designs Or Analyses: The authors conducted extensive ablation studies to confirm the efficacy of their approach, including comparisons of different CLAP models, data compositions, training schedules, and LLM sizes. They also introduced LongAudioBench, an expert-annotated benchmark for evaluating ALMs on long audio understanding. However, the reliance on synthetic data and the LLM-as-a-judge framework for evaluation could introduce biases or limitations in the results. Supplementary Material: Not provided. Relation To Broader Scientific Literature: Not applicable. Essential References Not Discussed: No. 
Other Strengths And Weaknesses: Strengths: 1. Proposes innovative data generation, architecture design, and training strategies. 2. Achieves SOTA performance across 20+ benchmarks with a smaller model. Weaknesses: 1. Limited discussion on the model's ability to understand speech content. 2. Potential over-reliance on synthetic data for training, which may affect generalization. Other Comments Or Suggestions: **Data Accessibility**: While the authors mention open-sourcing code and data, providing a clear timeline or repository link would enhance accessibility. Questions For Authors: What are the potential limitations of using GPT-4o for generating QA pairs in the AudioSkills dataset, and how did you mitigate these limitations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your thorough review and constructive feedback. We have tried to address each of your concerns point by point. > Data Accessibility: While the authors mention open-sourcing code and data, providing a clear timeline or repository link would enhance accessibility. **Ans.** As stated on Page 1 of our paper, we will open-source all code, data, and model checkpoints. We have all approvals and plan to release the dataset publicly at the time of ICML notifications (if accepted). This will include all QAs and audios released on GitHub and HuggingFace. For the purpose of this rebuttal, we are providing access to a part of the QA data at this [anonymous link](https://github.com/anon-sub-openreview/af2). We kindly ask that you do not share this dataset for now. > Limited discussion on the model's ability to understand speech content. **Ans.** As mentioned in the Abstract and Introduction, Audio Flamingo 2, like its predecessor, primarily focuses on sounds and music. Speech content understanding is intentionally out of scope for AF2, aligning with other models in this family such as Pengi [2], GAMA [1], and LTU [3]. That said, extending AF2 to handle speech is part of our ongoing and future work. We would like to share some preliminary insights: - **Minimal changes needed for speech support:** Incorporating speech understanding simply requires adding speech data to the training corpus. - **Stage-wise improvements:** We observe that including ASR data in Stage 1 and Speech QA data in Stage 2 significantly improves AF2’s performance on speech tasks. - **Replacing CLAP with Whisper:** Using a customized Whisper model in place of CLAP enhances AF2’s ability to process speech alongside music, improving performance on ASR and speech-to-text translation tasks. We are actively working on these extensions and will explore them more fully in future work. > Potential over-reliance on synthetic data for training, which may affect generalization. 
**Ans.** Thank you for the thoughtful question! Most of the audio used in the *AudioSkills* and *LongAudio* datasets is sourced from real-world recordings (as mentioned in the paper). The only exception is the *Counting* skill, which accounts for less than 1% of the total audio data (see Table 14 for detailed statistics). While the QA pairs are synthetically generated, this modality is inherently artificial—even in human annotations. Importantly, our use of real-world audio ensures there is no sim-to-real gap during training or inference. > However, the reliance on synthetic data and LLM-as-a-judge framework for evaluation could introduce biases or limitations in the results. **Ans.** Thank you for raising this concern. We adopt the *LLM-as-a-judge* evaluation framework following widely used practices in long-form video and multimodal benchmarks such as Video-MME [5], as well as other recent LLM evaluation benchmarks for audio understanding such as CompA-R [1] and AIR-Bench [4]. Prior work [4] has demonstrated that this framework provides a more robust and semantically grounded measure of generation quality, especially for open-ended tasks. It also mitigates common issues in traditional evaluation approaches, such as overly strict regex matching or limited coverage of reference answers. To reduce potential bias, we carefully designed our evaluation prompts (see Fig. 25) to be neutral, consistent, and faithful to the context of each QA pair. We acknowledge that synthetic evaluation frameworks have limitations, and we view this as an active area of improvement for future work. ### Citations [1] GAMA: A Large Audio-Language Model with Advanced Audio Understanding and Complex Reasoning Abilities (https://aclanthology.org/2024.emnlp-main.361/). [2] Pengi: An Audio Language Model for Audio Tasks (https://openreview.net/forum?id=gJLAfO4KUq). [3] Listen, Think, and Understand (https://openreview.net/forum?id=nBZBPXdJlC). 
[4] AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension (https://aclanthology.org/2024.acl-long.109/). [5] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis (https://arxiv.org/abs/2405.21075). [6] Visual Description Grounding Reduces Hallucinations and Boosts Reasoning in LVLMs (https://openreview.net/forum?id=3PRvlT8b1R).
Summary: This paper introduces a state-of-the-art audio understanding LLM, with a focus on long and complex acoustic scenes. Audio understanding has so far been limited to superficial captioning of individual sound events, often generating artificially inflated captions that try to give an illusion of complexity while having no more substance than a simple classifier. Thus, developing models that show an actual ability to analyze complex audios in depth, and provide precise sound scene analyses, is of wide interest to the audio community. This paper significantly improves over the previous state of the art in that regard, being competitive with powerful baselines such as Gemini. However, my main concern is about the nature of this paper's contribution: while a few interesting training ideas are introduced, the framework and architecture are almost identical to the original Audio Flamingo, while the training data brings most of the improvement in performance. As a reader, the main outcome of this paper clearly appears to be the proposed training datasets and the long context benchmark. Thus, I would like to ask the authors to explicitly explain their plan for releasing the training data and the evaluation benchmark during the rebuttal period. Moreover, a potential concern is the appropriateness of such a dataset paper for publication at ICML. I think the performance gains that this dataset provides, and the very welcome introduction of a benchmark that goes beyond superficial acoustic labeling, make it worthy of publication if this data is released. Claims And Evidence: Claims are clearly supported through extensive experiments. Methods And Evaluation Criteria: The paper comes with some interesting modeling ideas for contrastive training: using a "weak" audio captioning model together with a visual captioning model to produce rich audio captions makes a lot of sense. 
Similarly, using captions with correct content but incorrect chronology as negatives during contrastive training is also a simple and interesting trick. Yet, the improvement from this methodology is marginal with respect to the gains obtained from the introduced synthetic dataset. Theoretical Claims: None. Experimental Designs Or Analyses: Experiments cover a wide range of audio domains and several baselines among the strongest models for audio understanding LLMs. What is a bit unusual about this paper is that it is presented as both a model paper and a dataset paper. I believe that a clearer narrative would have focused on the dataset and the evaluation, while benchmarking several existing models on this data, which would leave even more room for discussing in depth the challenges of creating audio understanding datasets. Supplementary Material: I reviewed everything except the details of the prompts. Relation To Broader Scientific Literature: I believe the main impact will be the dataset and the long audio benchmark. The contrastive training tricks will probably also become standard; however, they are likely more marginal in terms of impact. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: L168,C2: "to condition audio representations on the LLM" -> "to condition the LLM on audio representations" Questions For Authors: Update post-rebuttal: I appreciate the authors' commitment to release a dataset that will be a significant contribution to the field of audio understanding by allowing for more complex and realistic scenarios. I thus increase my score. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thorough review and constructive feedback. We have tried to address each of your concerns point by point. > Thus, I would like to ask the authors to explicitly explain their plan in releasing the training data and the evaluation benchmark during the rebuttal period. **Ans.** As stated on Page 1 of our paper, we will open-source all code, data, and model checkpoints. We have all approvals and plan to release the dataset publicly at the time of ICML notifications (if accepted). This will include all QAs and audios released on GitHub and HuggingFace. For the purpose of this rebuttal, we are providing access to a part of the QA data at this [anonymous link](https://github.com/anon-sub-openreview/af2). We kindly ask that you do not share this dataset for now. > I believe that a clearer narrative would have focused on the dataset and the evaluation, while benchmarking several existing models on this data, which would leave even more room for discussing in depth the challenges of creating audio understanding datasets. **Ans.** Thank you for the suggestion. We did benchmark the strongest fully open-source LALM with available training code—**GAMA**—on our dataset. As shown in Table 5, simply introducing new data is insufficient to significantly improve audio understanding performance. Our results highlight that **it is the combination of better audio perception** (via a stronger audio encoder), the **cross-attention architecture**, and **curriculum learning** that leads to meaningful improvements. This holistic approach is critical for the unique challenges posed by long audio understanding. Moreover, our architectural choice of cross-attention also enables our first exploration of **tackling long-form audio reasoning**. 
Section 3.2 of the paper details how our cross-attention mechanism offers a more efficient and effective alternative to prefix-based architectures, avoiding the quadratic time complexity that arises from increasing context length in standard attention mechanisms. > On contributions beyond just data (AudioSkills) - **Efficiency in long audio modeling:** Section 3.2 explains how our AF2 module acts as an effective alternative to prefix-based architectures for long audio inputs. By leveraging cross-attention instead of increasing context length, we avoid the quadratic time complexity associated with standard attention mechanisms - **Dynamic batching for efficient training:** As described in Appendix H.2, we introduce a dynamic batching strategy based on audio length. This significantly reduces padding, improves training efficiency, and yields faster convergence with better model performance. - **Superiority of cross-attention over prefix tuning:** Section 6.5 demonstrates that cross-attention outperforms prefix tuning on the same dataset, indicating its effectiveness for audio-text alignment and long-context understanding. - **Curriculum learning strategy:** In Section 6.7, we compare 10 training schedules and show that our proposed curriculum learning strategy consistently achieves the best performance. We further highlight a counterintuitive insight: fine-tuning the language model weights degrades performance, and curriculum design is crucial for reasoning over long-form audio. - **High-quality data over large models:** Finally, Section 6.6 shows that training smaller LLMs on high-quality audio-text data results in performance that matches or exceeds that of larger LLMs trained on lower-quality data—underscoring the importance of data quality over model size for audio reasoning. 
- **Long audio modeling:** (also acknowledged by the reviewer) We explore long audio reasoning for the first time and make attempts to build training and benchmarking datasets for this purpose. - **Evidence that better, more robust audio representations improve performance:** Table 3 highlights that AF-CLAP improves Audio Flamingo 2's performance across benchmarks. Beyond the chosen benchmarks (primarily taken from the literature), as also mentioned in Section 3.1.1, AF-CLAP may help Audio Flamingo 2's performance in various real-world scenarios by expanding its breadth of audio understanding (e.g., home sounds). Finally, robust representations also help our UnusualQA task (in the demo), as the representations capture fine-grained information about the input audio.
Relational Invariant Learning for Robust Solvation Free Energy Prediction
Accept (spotlight poster)
Summary: The paper proposes the Relational Invariant Learning framework (RILOOD) for solvation free energy prediction. RILOOD consists of three key components: a mixed conditional modeling module to integrate data from different environments, a multi-granularity refinement strategy for context-aware representation learning, and an invariant learning module to identify robust patterns that generalize to unseen environments. Experimental results demonstrate that RILOOD significantly outperforms existing SOTA methods across various datasets, showcasing its effectiveness in capturing complex molecular interactions. The paper highlights the challenges of OOD scenarios and suggests future research directions to balance broad generalization with domain-specific precision. Claims And Evidence: Yes. Methods And Evaluation Criteria: I don’t think the baseline comparison is strong enough due to the insufficiency of OOD datasets. Compared to Table 4, which includes a broader range of datasets, Table 1 only compares three datasets. The authors should consider including more datasets to better illustrate the effectiveness of the method. Theoretical Claims: I carefully reviewed the proof in the appendix; the proof of the theoretical claims is generally well structured, but there are some points that require further clarification and rigor, and addressing these issues will increase the credibility of the theoretical framework. - The theoretical rationale for the uncertainty constraint of RMSE in A.1 is interesting, but the proof provided lacks sufficient detail. It would be helpful to have a more comprehensive analysis of how this constraint affects representation learning, and to provide formal evidence of its effectiveness in improving the robustness of the model. - For proof A.2, acknowledging any potential limitations or assumptions that may not hold in certain scenarios would provide a more balanced view. 
Experimental Designs Or Analyses: This paper conducted a variety of experiments to illustrate the effectiveness of the method, including comparative experiments, ablation experiments, and hyper-parameter experiments, and gave detailed analyses. Supplementary Material: Yes, I checked the appendix. Relation To Broader Scientific Literature: In this paper, the authors extend traditional molecular property prediction to the solvent environment and consider the role of solvents, which helps further expand the practical application of existing AI for science. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** - The motivation of the paper is clearly defined. - This paper is well-written and effectively organized. - The topics discussed are very important in practical applications. - The experiments show that RILOOD achieves robust solvation free energy prediction and outperforms baseline models under distribution shifts. **Weaknesses** - Personally, I do not find CVAE to be particularly innovative, as it is often used in molecular generation [1]. - This paper is only focused on the task of solvation free energy prediction, and the method is difficult to extend to general molecular property prediction or DDI tasks. - Authors should add case studies illustrating generalization performance in practical applications. [1] Lim, Jaechang, et al. "Molecular generative model based on conditional variational autoencoder for de novo molecular design." Journal of Cheminformatics 10 (2018): 1-9. Other Comments Or Suggestions: - The appendix reference in the caption of Table 1 is incorrect. - In Fig. 1(b), the abscissa should show the corresponding names rather than SMILES strings for ease of understanding. - There is overlapping data in Fig. 3(b). Questions For Authors: - What do the arrows of the same color in Fig. 1(a) mean for molecular interactions? How is interaction invariance accounted for? - Why is w/o *MCAR* better than *B+MCAR[M]* in Table 2? 
- What is the difference between $c^r$, $C$, $c_1$, and $c_2$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer b6ge: Thank you for your thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. ## Weakness **W1. Personally, I do not find CVAE to be particularly innovative.** While we understand your perspective on the novelty of the CVAE framework, our focus in this paper is not solely on theoretical innovation or technological advancement. Instead, we aim to address a valuable domain-specific problem: solvent generalization in practical applications. We believe that our work is pioneering in its application of CVAE to the context of solvent generalization, which has significant implications for real-world scenarios. By concentrating on this specific challenge, we provide insights and solutions that can enhance the predictive capabilities of models in the field of solvation properties. **W2. This paper is only focused on the task of solvation free energy prediction, and the method is difficult to extend to general molecular property prediction or DDI tasks.** As we mentioned in the limitations, the types of environments don't increase indefinitely. Considering the scalability and computational cost of the model, when there are too many types of solvents, clustering can be performed first, and then the pseudo-labels of the clustering can be conditionally modeled, so we think that our approach can be extended to more task types in the future. **W3. More case studies:** We've added some solvent-holdout test cases to verify real-world performance. Due to time constraints, we only selected a few cases. As can be seen from the case studies, the prediction error remains low for different types of solvents, including alkanes, alcohols, and olefins. Table: Results of RILOOD on the solvent-holdout test. 
| Solvent | RMSE |
|-------|-------|
| Tetraglyme | 0.1004 |
| Pentadecane | 0.1100 |
| Tetradecane | 0.2462 |
| 1-Dodecanol | 0.2315 |
| 1-Bromonaphthalene | 0.7796 |
| Tetraglycol | 0.2015 |

**Minor issues.** We’ve corrected the errors and updated the manuscript. ## Questions **Q2. Why is w/o MCAR better than B+MCAR[M] in Table 2?** w/o MCAR is better than B+MCAR[M] on only one dataset, which may be because w/o MCAR only removes the interaction module while the objective function remains unchanged, which can lead to the model being optimized in the wrong direction. **Q3. What is the difference between $c^r$, $C$, $c_1$, $c_2$?** $C$ is the collection of all solvents, $c^r$ refers to a specific class of solvents, $c_1$ refers to the type of solute, and $c_2$ refers to the type of solvent. We hope that our response will address your concerns and better clarify the contribution and value of our work. --- Rebuttal Comment 1.1: Comment: Thanks so much for your thorough rebuttal. It addressed the majority of my concerns, and I'm willing to raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful review. We are very grateful for your meticulous review and constructive criticism of our work.
Summary: This paper investigates the challenge of out-of-distribution generalization across different environments in molecular solvation free energy prediction and introduces the RILOOD framework. RILOOD integrates mixup-based conditional modeling, a multi-granularity refinement strategy, and an invariant relational learning module to alleviate the limitations of traditional methods that overly rely on core substructures. Experimental results demonstrate that RILOOD significantly outperforms existing methods on multiple benchmark datasets. Claims And Evidence: Some of the paper's claims have minor issues. The difference among environments, environmental conditions, and solvent categories is not clearly expressed. Methods And Evaluation Criteria: The paper evaluates the effectiveness of the model and improving generalization in OOD scenarios. Extensive experiments and theoretical analysis prove the superiority of the method. Theoretical Claims: The paper appears to provide sufficient theoretical analysis. Experimental Designs Or Analyses: 1. The proposed method is evaluated using RMSE, MAE, AUROC, and Accuracy, with experiments conducted on both real-world and synthetic datasets. 2. t-SNE is used to visualize molecular interactions in the best performing model. Supplementary Material: The supplementary material provides detailed proofs and includes some interesting experiments. Relation To Broader Scientific Literature: This work is important. Predicting the solvation free energy of molecules in out-of-distribution (OOD) scenarios has greater practical value compared to identically distributed scenarios. Essential References Not Discussed: I believe that essential references have been sufficiently discussed. Other Strengths And Weaknesses: Strengths 1. Figure 1 clearly presents the motivation of the work. 2. The experiments include numerous visualizations that further validate the paper’s claims. 3. The performance improvements are significant. Weaknesses 1. 
RILOOD integrates several modules to capture complex molecular interactions, but this results in a relatively complex model structure. 2. The distinction between "surrounding environments" and "support environments" is unclear. 3. Some variables are repeatedly defined (e.g., "K" is used for both surrounding environments and attention keys, "c" represents both coarse-grained conditions and environmental conditions); please check and reduce this confusion. 4. Both Sections 3.1 and 3.2 include a problem formulation, which is confusing. Other Comments Or Suggestions: 1. Lines 161-164 and 263-266 are blank. 2. There are too many variables, making the paper somewhat difficult to read. 3. In Eq. (4), $G$ should be $\mathcal{G}$. 4. There is a missing cite at line 332 ("Appendix ??"). Questions For Authors: Please refer to the above comments. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer 9m7P: Thank you for your thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. **W1. RILOOD integrates several modules to capture complex molecular interactions, but this results in a relatively complex model structure:** We understand your concerns regarding the complexity of the model. We chose this complex model structure because it allows us to better capture the intricate interactions between molecules, which is crucial for improving the accuracy and reliability of our predictions. Although the model structure is relatively complex, we believe that this complexity enables the model to gain a more comprehensive understanding of molecular characteristics, thereby providing better performance in practical applications. Additionally, we have also considered interpretability in our model design and have made efforts to ensure that each module of the model has a clear function. We are also considering possible simplifications in future work to reduce complexity while maintaining model performance. We would greatly appreciate any further suggestions you may have regarding this. **W2. The distinction between "surrounding environments" and "support environments" is unclear:** "Surrounding environments" and "support environments" mean the same thing. We'll change it to a unified expression. **W3. Symbols are defined repeatedly:** Thank you for your corrections, and we will double-check for further proofreading. **W4. Both Sections 3.1 and 3.2 include a problem formulation, which is confusing:** The problem formulation in 3.1 is the molecular interaction prediction task in general, mainly under the IID setting; the problem formulation in 3.2, under the OOD setting, is meant to emphasize the difference between the distributions of the training set and the test set. We'll update the language to make it easier to understand.
**Minor issues:** We’ve corrected errors and updated the description in the paper. Thank you for your valuable comments, and we hope that our response will address your concerns and better clarify the contribution and value of our work. --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed and thoughtful rebuttal. It effectively addressed most of my concerns, and based on this, I will revise my rating upward.
Summary: In this paper, the authors present the Relational Invariant Learning framework (RILOOD) to improve OOD generalization in solvation free energy prediction. RILOOD learns invariant molecular representations in varied environments and applies mixup-enhanced molecular features for modeling environmental diversity. Extensive experiments demonstrate its superiority over existing methods under various distribution shifts. Claims And Evidence: Yes. Claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The authors provide a wealth of benchmarks to prove the superiority of the proposed method, and the evaluation method is effective. Theoretical Claims: Yes. Experimental Designs Or Analyses: The authors provide a large number of experiments to illustrate the advanced nature of the proposed method, and the experimental analysis is reasonable. Supplementary Material: Yes. Relation To Broader Scientific Literature: No. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. The author offers a new invariant learning method, which, as far as I know, is novel. 2. Compared with the SOTA method, the experimental results show that the proposed method performs well. 3. The author provides a theoretical proof. **Weaknesses** 1. Although theoretical support is provided, there is no innovation in theory. 2. Figure 1 is intended to illustrate the invariance of interactions, but illustration (a) is not easy to understand. 3. There is no clear definition of what invariant interaction patterns are. 4. Obtaining explicit environment labels for solute-solvent pairs is often infeasible, so how are they obtained in this method? Other Comments Or Suggestions: The writing is not rigorous and is repetitive in many places. - What does $\varepsilon$ (multiple environments) in line 163 mean? How is it related to the $\varepsilon$ in line 115, which is clearly not a representation?
- Is the $l$ in "drawing samples $z^{(l)} (l = 1, 2, ..., l)$" the same as the $l$ in "$h_{l+1} = PReLU(W_lh_l + b_l)$" in Equation 10? $l$ also clashes with the $l$ in the loss; it is recommended to change the symbol. Questions For Authors: 1. Could you provide real-world examples to further illustrate the OOD generalization of RILOOD? 2. Regarding the t-SNE plot, how does MCAR enhance feature diversity? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer zysq: Thank you for your thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. ## Weakness **W1. About theoretical innovation:** We acknowledge that our study may not introduce entirely new theoretical concepts; however, we believe that the theoretical support we provided is crucial for grounding our research in established frameworks, particularly in the context of invariant learning for predicting solvation properties. Our research contributes significantly to the field by demonstrating how invariant learning can effectively capture the complex relationships between molecular structures and their solvation properties. By integrating this methodology, we not only improve prediction accuracy but also provide a more interpretable framework for understanding the underlying molecular interactions. **W2. Figure 1 is intended to illustrate the invariance of interactions, but (a) the illustration is not easy to understand.** As shown in the figure, the horizontal axis represents solvent polarity, while the dotted box illustrates the distribution of different solute properties in the two solvents. Traditional methods predict solute properties by identifying core substructures; however, these core substructures may vary across different solvents. The key to our approach is to establish invariance by modeling interactions, which allows for a deeper understanding of the diverse characteristics of solute molecules in various solvents. The green arrows in the lower part of the figure indicate the interactions between molecules. **W3. There is no clear definition of what invariant interaction patterns are.** Invariant interaction patterns are features that remain consistent across various environments or conditions. This involves identifying patterns that reflect interactions in diverse molecular contexts.
Through invariant learning, we can extract key features from complex molecular data that capture fundamental interactions between molecules, independent of specific conditions. The model can still make accurate predictions even with unseen data or conditions. This capability is crucial for studying molecular interactions, as molecules may behave differently in various biological environments or experimental settings. **W4. Since obtaining explicit environment labels for solute-solvent pairs is often infeasible, how are they obtained in this method?** Since environment labels are not readily available, we derive them from auxiliary information that is easier to obtain, e.g., the type of solvent or the scaffold of the molecule, rather than from molecular interaction information obtained through complex modeling calculations. **Minor issues:** We’ve corrected errors and updated the description in the paper. ## Questions **Q1. Could you provide real-world examples to further illustrate the OOD generalization of RILOOD?** We provide solvent holdout case studies for your reference. Due to time constraints, we only selected a few cases. As can be seen from the case studies, the prediction error remains low for different types of solvents, including alkanes, alcohols, and olefins. Table. The results of RILOOD on the solvent holdout test.

| Solvent | RMSE |
|-------|-------|
| Tetraglyme | 0.1004 |
| Pentadecane | 0.1100 |
| Tetradecane | 0.2462 |
| 1-Dodecanol | 0.2315 |
| 1-Bromonaphthalene | 0.7796 |
| Tetraglycol | 0.2015 |

**Q2. Regarding the t-SNE plot, how does MCAR enhance feature diversity?** In Fig. 4(b), the feature distribution of MCAR is more dispersed than the original distribution, indicating that it contains more information and can better reflect the differences between samples after multi-level interaction.
We hope that our response will address your concerns and better clarify the contribution and value of our work.
Summary: This paper presents a novel out-of-distribution learning method for addressing the challenge of predicting solvation free energy in molecular interactions. The key innovation lies in the authors' approach to modeling the distribution of molecular interactions. They validated the effectiveness of their proposed model by testing it under out-of-distribution conditions. Claims And Evidence: A few statements lack adequate support, such as the one in line 277: “...the decisive interaction patterns of solute vary, leading to distinct solute properties.” Methods And Evaluation Criteria: The methodology and evaluation are reasonable. Theoretical Claims: Yes, I checked the theoretical claims. Experimental Designs Or Analyses: The authors do not provide a specific description of the division of the dataset, such as the number of environments in the test set and the training set, which is not discussed in the experiment details and may make the comparisons unfair. Supplementary Material: Yes, I checked the appendix. Relation To Broader Scientific Literature: This paper expands the application scenarios of molecular property prediction and helps extend it to real-world settings. Essential References Not Discussed: I note that MMGNN is similar to this work, but the authors do not make a comparison. [1] MMGNN: A Molecular Merged Graph Neural Network for Explainable Solvation Free Energy Prediction Other Strengths And Weaknesses: **Strengths** - This paper addresses the OOD problem of molecular solvation free energy prediction and is pioneering work in this area. - The proposed method relies on capturing features of molecular interactions that are invariant across environments, which is novel.
- This paper is similar to CMRL in that it also solves OOD problems through causality; please explain how it differs from CMRL. - The hyperparameter alpha appears to be redundantly defined in both line 248 and Eq. 14. Additionally, the authors do not clarify how to choose alpha in line 248. - Fig. 3(c) is not detailed enough to provide a description of the specific tasks; please elaborate further. Other Comments Or Suggestions: - Fig. 1 uses some terms, such as polarity and dielectric constants, but they are not explained in the main text; what do they have to do with the invariance in this paper? - In $\mathbb{R}^{N_x}$ in Proposition 4.2, $N_x$ is missing a definition. Questions For Authors: - Please explain “varying degrees between C and the label Y” in line 410. - For other questions, please refer to the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer Jwej: We sincerely appreciate the reviewer's thoughtful suggestions and questions. We have provided point-by-point answers to each weakness and question. ## Weakness **W1. Details about datasets:** We provided splitting details in the appendix. The method of scaffold split is available in MoleculeNet, and the solvent split follows the same procedure as the scaffold split. As shown in Definition 1, we divide the dataset according to different environments, where scaffold and solvent define different environments. **W2. Differences from CMRL:** CMRL treats the interacting substructure between molecular pairs as the invariant, while weakening the influence of solvent molecules through backdoor adjustment. However, this invariance does not apply to solvent environment cases, because the interaction between solvents and solutes is multi-level: different levels of molecular interaction affect different properties. In contrast, our method recognizes that invariance is not consistent across different solvent environments due to the complex, multi-hierarchical nature of solute-solvent interactions, where varying interaction scales impact molecular properties differently. **W3. The choice of hyperparameter alpha:** For alpha, we chose 0.5 in order to get a uniform distribution. **W4. Minor issues:** Thanks for pointing out our omission. Figure 3 shows the results of the COMPSOLV dataset under 4 OOD settings. We've updated the graph annotations for Fig. 3(c). ## Other Comments Or Suggestions **C1. Fig. 1 uses some terms, such as polarity, dielectric constants, but they are not explained in the main text, what does it have to do with the invariance in this paper?** Some of the terms in Fig. 1 are solvent-related properties, such as the dielectric constant and polarity, which are intrinsic to the solvent.
By constructing multi-level interactions, the effects of these properties on solute molecules can be learned, and invariance can be further learned. **Differences from MMGNN:** MMGNN is a model for modeling the relationship between solvents and solutes, whereas our method focuses on the generalization of molecules across different solvents; the two goals are fundamentally different. In contrast, our approach recognizes that invariance is not uniform in solvent environments due to the multi-hierarchical nature of solute-solvent interactions, where different interaction scales affect various molecular properties. To solve this problem, we introduce a multi-granularity context-aware refinement mechanism that captures local and global interactions to ensure robust performance under different solvent conditions. ## Claims And Evidence: The statement “the decisive interaction patterns of solute vary, leading to distinct solute properties” can be supported by references to PAR [1] and UniMatch [2]. We have included these references for further clarification. ## Questions **Q1: “...varying degrees between C and the label Y” in line 410?** The spurious relationships of varying degrees between C and Y mean that spurious correlations are introduced by controlling the distribution of variant subgraphs. We hope that our response will address your concerns and better clarify the contribution and value of our work. ## References [1] Property-Aware Relation Networks for Few-Shot Molecular Property Prediction [2] UniMatch: Universal Matching from Atom to Task for Few-Shot Drug Discovery --- Rebuttal Comment 1.1: Comment: Thank you for the responses. The authors clarified the differences from previous works and addressed my concerns about the dataset partition settings. I also considered the comments and responses from other reviewers. Based on that, I will keep my positive score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful comments.
We are very grateful for your meticulous review and constructive criticism of our manuscript. Your comments are insightful and help us better understand how to improve our work.
Variance-Reduced Forward-Reflected-Backward Splitting Methods for Nonmonotone Generalized Equations
Accept (poster)
Summary: This paper proposes two stochastic variance-reduction algorithms to solve a class of nonmonotone equations. The key technical tool used in this paper is the intermediate object $S_{\gamma}$. The authors apply classical variance reduction techniques on $S_{\gamma}$ instead of the operator $G$, and they show their proposed algorithms attain an oracle complexity of $\mathcal{O}(n+n^{2/3}\epsilon^{-2})$. ## update after rebuttal The author's reply makes sense, so I keep my score. Claims And Evidence: Strength: * The authors provide a theoretical analysis of their proposed approaches and show their algorithm attains an oracle complexity of $\mathcal{O}(n+n^{2/3}\epsilon^{-2})$. * Applying variance reduction on the intermediate object $S_{\gamma}$ is a novel and interesting idea. Weakness: * Paper organization is not good enough. The comparison with (Cai et al. 2023) can be presented in the main text instead. The comparison with existing work can be summarized in a table. The author may consider highlighting the co-coercive condition presented in (Cai et al. 2023) in the main text. * The convergence rate of the proposed methods is inferior to that of the methods in (Cai et al. 2023). Could the authors recover the $\mathcal{O}(n+n^{1/2}\epsilon^{-2})$ convergence rate if the co-coercive assumption is imposed? * In Theorem 3.1, the choice of $\kappa$ seems restrictive. * In the experiments, the author may consider comparing with the methods proposed in (Cai et al. 2023). Methods And Evaluation Criteria: See above. Theoretical Claims: See above. Experimental Designs Or Analyses: See above. Supplementary Material: I have read some parts of the supplementary materials, including Appendices A and B and part of the proofs in C. The theoretical analysis in the supplementary material is sound and easy to follow. Relation To Broader Scientific Literature: It is an interesting work for solving large-scale nonmonotone equations.
Overall, the idea of applying variance reduction on the intermediate object is novel and interesting. Essential References Not Discussed: I think the most relevant references have been discussed. Other Strengths And Weaknesses: Some other weaknesses include: * In both VFR-SVRG and VFR-SAGA algorithms, the minibatch size is quite large. It is of the order $\mathcal{O}(n^{2/3})$, which is larger than most minibatch sizes of stochastic methods solving nonconvex optimization. * The choice of $\gamma$ is quite restrictive. It must belong to the range of $(1/2, 1)$, so it cannot recover the classic FRBS and optimistic gradient methods. Could the authors provide some intuition as to why $\gamma$ cannot be chosen as $1/2$ for the proposed approaches? Other Comments Or Suggestions: * In Corollary 3.3, line 324, why is there a $p$ in the denominator? The SAGA estimator no longer needs probability $p$. * A table summarizing existing work with different assumptions and convergence criteria could be provided. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First of all, we thank the reviewer for their comments and feedback on our work. Below is our response to each point. P1. Weakness: + Q1.1: Paper organization is not good enough. The comparison with (Cai et al. 2023) can be presented in the main text instead. The comparison with existing work can be summarized in a table. The author may consider highlighting the co-coercive condition presented in (Cai et al. 2023) in the main text. > R1.1: Thank you for your comment. We will try to improve the organization of the paper as suggested. We will bring the comparison with [Cai et al. 2023] back to the main text, and highlight the co-coercivity condition used in [Cai et al. 2023] vs. our assumptions. This co-coercivity condition is stronger than monotonicity+Lipschitz, and quite limited since it does not cover bilinear matrix games. We will also add a table summarizing the comparison with existing works. + Q1.2: The convergence rate of the proposed methods is inferior to the methods obtained in (Cai et al. 2023). Could the authors recover the $\mathcal{O}(n+n^{1/2}\epsilon^{-2})$ convergence rate if the co-coercive assumption is imposed? > R1.2: In terms of convergence rates, our rate is slower than [Cai et al. 2023] ($1/k$ vs. $1/k^2$) when solving co-coercive equations. However, since our assumptions are significantly weaker, it is unclear if this theoretical comparison is fair. Note that Halpern's method in [Cai et al. 2023] is a type of accelerated method, while our method is not. We believe that $O(1/k)$ is the best rate one can get for non-monotone problems, similar to nonconvex optimization methods, though we do not have a lower bound result to compare at this moment. In nonconvex optimization, this rate is optimal (up to a constant factor). + Q1.3: In Theorem 3.1, the choice of $\kappa$ seems restrictive. > R1.3: So far, the range of $\kappa$ is indeed restrictive since we did not focus on optimizing $M$ and $\gamma$ in Theorems 3.1 and 4.1.
In deterministic methods, the largest range of $L\kappa$ is $L\kappa < 1$. We believe that our range on $L\kappa$ can also be improved, but it requires substantial changes in the algorithmic design and analysis. + Q1.4: In the experiments, the author may consider comparing with the methods proposed in (Cai et al. 2023). > R1.4: Yes, we will add some examples to compare with [Cai et al. 2023] for co-coercive problems as suggested. P2: Other Strengths And Weaknesses: + Q2.1: In both VFR-SVRG and VFR-SAGA algorithms, the minibatch size is quite large. It is of the order $\mathcal{O}(n^{2/3})$, which is larger than most minibatch sizes of the stochastic methods solving nonconvex optimization. > R2.1: This choice of mini-batch (i.e., $b = \mathcal{O}(n^{2/3})$) is to attain the complexity of $\mathcal{O}(n + n^{2/3}\epsilon^{-2})$. Our theory works with any batch size $b \in [1, n]$. However, the choice of $b$ affects the final oracle complexity. Therefore, in practice, we can choose any batch size, and our method still converges, but the overall oracle complexity will be worse than the best one $\mathcal{O}(n + n^{2/3}\epsilon^{-2})$. We will add a brief discussion on this aspect in the revision. + Q2.2: The choice of $\gamma$ is quite restrictive. It must belong to the range of $(1/2, 1)$, so it cannot recover the classic FRBS and optimistic gradient methods. Could the authors provide some intuition as to why $\gamma$ cannot be chosen as $1/2$ for the proposed approaches? > R2.2: As can be seen from (22) of Lemma C.1, if $\gamma = 1/2$, then we must impose $\mu = 0$ and $\kappa = 0$, leading to star-monotonicity in Assumption 1.4. Our theory still works; it then no longer covers Assumption 1.4 with $\kappa > 0$, but it does cover monotone+Lipschitz problems. As far as we know, existing FRBS methods require monotonicity, which is consistent with our theory when choosing $\gamma = 1/2$. We will add a discussion of this special case.
P3: Other Comments Or Suggestions: + Q3.1: In Corollary 3.3, line 324, why is there a $p$ in the denominator? The SAGA estimator no longer needs probability $p$. > R3.1: Thank you. It is indeed a typo. We will correct it. + Q3.2: A table summarizing existing work with different assumptions and convergence criteria could be provided. > R3.2: We will add a table to summarize existing works with assumptions and convergence criteria as suggested. P4: Overall Recommendation: 3: Weak accept (i.e., leaning towards accept, but could also be rejected) > R4: We hope we have addressed your concerns. We will implement what we have promised above. Given our novel methods and theoretical contributions, and the effort of addressing your concerns, your re-evaluation is highly appreciated.
Summary: 1. inspired by SVRG & SAGA, construct two variance-reduced estimators for the forward-reflected operator 2. show that VFR and VFRBS methods achieve SOTA oracle complexity for non-monotone operator splitting problems Claims And Evidence: 1. Does the convergence of your splitting algorithm have a strong connection with your estimator? If I replace SVRG/SAGA with another variance-reduced estimator, will the algorithm converge? 2. Could you provide a table to compare with the existing literature, in terms of problem setup and sample complexity? Methods And Evaluation Criteria: The authors claimed their work is to solve non-monotone, non-smooth, and large-scale problems arising from generative ML, adversarial learning, and robust learning. However, their numerical examples are limited to synthetic or toy examples whose scale is very small. Theoretical Claims: I didn't check the proofs. Experimental Designs Or Analyses: 1. Solving non-monotone problems is one of the main differences compared to the existing works. However, the paper does not explain why those examples are non-monotone. 2. I understand that it is hard to do a theoretical comparison to those works requiring a monotonicity assumption, e.g. [Cai et al., 2023]. Could you provide some empirical comparison? Since the monotonicity assumption is violated for those approaches, one can expect slow convergence or even divergent behavior. Supplementary Material: I read the synthetic WGAN example. Relation To Broader Scientific Literature: Exploring the application of operator splitting in non-monotone and stochastic setups could be useful for engineering problems broader than machine learning. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strength: 1. solid theoretical work Weakness: 1. very dense theory and very weak experiments 2. the motivating ML applications, e.g. GAN and adversarial training, are not the mainstream approaches for generative modeling and robust deep learning.
Other Comments Or Suggestions: I think this paper is too dense in its technical presentation, leaving very little room for its possible applications in ML. For example, what kinds of non-convex problems satisfy Assumption 1.4? Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: First of all, we greatly appreciate the comments and questions from the reviewer. Below is our detailed response. Q1. "Does the convergence ... estimator? If I replace SVRG/SAGA ... converge? >R1. The answer to the first question is "no": the convergence does not hinge on one specific estimator. In lines 233-240 and 297-304, we have stated that any estimator $S^k$ satisfying Definition 2.1 can be used in our methods, covering a wide class of unbiased estimators. The convergence guarantees are stated in Theorems 3.1 and 4.1 without specifying $S^k$. Therefore, any estimator satisfying Definition 2.1 can be used in our methods to obtain convergence guarantees as in these theorems. SVRG and SAGA are two concrete instances of $S^k$. However, the oracle complexity depends on the specific estimator since we do need to know the cost of constructing such an estimator (see Subsections 3.2 and 4.2). Q2. "Could you provide a table to compare with the existing literature, in terms of problem setup and sample complexity?" >R2. Thank you for the request. Yes, we will provide a table summarizing existing results and comparing them with ours. Q3. "The authors claimed ... very small." >R3. Our paper primarily focuses on new algorithms, theoretical convergence, and oracle complexity rather than experiments. Our experiments are used to validate the theory, and we did not specifically focus on concrete ML applications. Nevertheless, based on your suggestion, we will increase the sizes of our experiments in the revision. >Regarding nonsmooth and non-monotone problems, Supp. E2 provides nonsmooth and non-monotone problems since the underlying matrix is not necessarily positive semidefinite (non-monotone), and they have constraints (nonsmoothness). Example 2 tackles a nonlinear minimax problem, which is nontrivial compared to the quadratic minimax one in Example 1. As mentioned above, we will substantially increase the size of our experiments as you suggested. Q4: "Solving non-monotone problem ... are non-monotone."
>R4: Example 1 allows $\mathbf{G}$ to be non-positive semidefinite. Therefore, it violates the condition $\langle Gx - Gy, x - y\rangle \geq 0$ (since $\langle Gx - Gy, x - y\rangle = (x-y)^T\mathbf{G}(x - y)$). This means that $G$ is non-monotone. We will clearly explain this in the revision. Q5: "I understand that it is hard to do a theoretical comparison ... even diverge behavior." >R5: The work by (Cai et al. 2023) relies on Halpern's fixed-point iteration, and indeed requires stronger assumptions than ours. We will conduct numerical examples to compare both methods, at least in the co-coercivity case. We are not sure if it will diverge in the non-monotone case, but there is no theoretical result to guarantee convergence for this method under Assumption 1.4. Q6: Weakness: -- very dense theory and very weak experiments -- the motivating ML applications, e.g. GAN and adversarial training, are not the main line for generative modeling and robust deep learning. >R6.1: We agree with the reviewer that the paper is dense, and we will try to improve its presentation as suggested. It was significantly challenging for us to fit both strong theory and thorough experiments into a 10-page paper. Therefore, we chose to focus on methodology and theoretical results in our submission. However, as we discussed above, additional experiments will be added to the supplementary in the revision. >R6.2: We will rephrase some inaccurate statements related to robust deep learning and generative modeling as suggested. Q7: "I think this paper is too dense ... applications in ML. For example, what kind of non-convex problem satisfies Assumption 1.4?" >R7: We will improve the presentation of our paper as suggested. We will add more experiments to the second example to handle nonconvex regularizers such as SCAD, to obtain non-monotone problems.
Regarding Assumption 1.4, one simple example is the nonconvex quadratic problem $\min_x \frac{1}{2}x^TQx + q^Tx$, where $Q$ is symmetric and invertible, but not necessarily positive semidefinite. The optimality condition is $Qx + q = 0$. If we define $Gx = Qx + q$ and $T = 0$, then $\langle Gx - Gy, x - y\rangle = (x-y)^TQ(x-y) \geq \lambda_{\min}(Q^{-1})\Vert Q(x-y)\Vert^2 = \lambda_{\min}(Q^{-1})\Vert Gx - Gy\Vert^2$. Hence, if $\lambda_{\min}(Q^{-1}) < 0$, then $G$ satisfies Assumption 1.4 with $\rho := -\lambda_{\min}(Q^{-1}) > 0$. Note that one can verify that our examples in Section 5 and SupDoc. E satisfy Assumption 1.4. We will add a paragraph explaining this case.

Q8: Overall Recommendation: 2: Weak reject (i.e., leaning towards reject, but could also be accepted)

>R8: We hope the reviewer will read our responses above and re-evaluate our work based on the new algorithms (especially the one for inclusions) and theoretical contributions. We promise to improve our paper and add a table for comparison, larger experiments, and a comparison to [Cai et al., 2023] as requested.
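The quadratic example above is easy to check numerically. The following sketch (our own illustration, not from the paper; the concrete matrix $Q$, sample count, and tolerance are arbitrary choices) verifies that an indefinite $Q$ yields $\rho = -\lambda_{\min}(Q^{-1}) > 0$ and that the weak-Minty-type inequality of Assumption 1.4 holds at random points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric, invertible, indefinite Q with one negative eigenvalue.
Q = np.diag([2.0, 1.0, -0.5])
q = rng.standard_normal(3)

def G(x):
    return Q @ x + q

# rho = -lambda_min(Q^{-1}); here Q^{-1} = diag(0.5, 1, -2), so rho = 2.
rho = -np.linalg.eigvalsh(np.linalg.inv(Q)).min()
assert rho > 0

# Weak-Minty-type bound: <Gx - Gy, x - y> >= -rho * ||Gx - Gy||^2.
for _ in range(1000):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    lhs = (G(x) - G(y)) @ (x - y)
    assert lhs >= -rho * np.linalg.norm(G(x) - G(y)) ** 2 - 1e-9
```

The bound is tight exactly along the eigenvector of the negative eigenvalue of $Q$, matching the derivation above.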
Summary: This paper studies the forward-reflected operator with two types of variance-reduced estimators: SVRG and SAGA. Using these estimators, the authors propose the Variance-Reduced Forward-Reflected Method and the Variance-Reduced Forward-Reflected-Backward Splitting, which solve nonlinear and generalized equations, respectively. Through a Lyapunov analysis, the authors establish a complexity of $O(n+n^{2/3}\epsilon^{-2})$ to obtain an $\epsilon$-residual error, matching the best-known results. Theoretical findings are further validated through numerical experiments. Claims And Evidence: The authors state that their results match the best-known results. I request that the authors verify this by providing a table or a comparison with prior works in terms of complexity and assumptions. This is necessary and would enhance the novelty and clarity. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense. Theoretical Claims: I briefly reviewed the proofs in both the main text and the supplementary material. Experimental Designs Or Analyses: The analyses and experiments look sound. Supplementary Material: I briefly reviewed the supplementary proofs. They provide complete proofs of the theorems in the main text. Relation To Broader Scientific Literature: As the authors describe, this work falls within the literature on stochastic approximation and nonmonotone inclusion problems. Essential References Not Discussed: I believe this paper addresses prior works. Other Strengths And Weaknesses: This paper systematically studies variance-reduced estimators, namely SVRG and SAGA estimators of the forward-reflected operator, and proposes new algorithms whose theoretical complexity matches the best-known results. The paper is well organized, and the experiments demonstrate the theoretical results. Furthermore, it shows that the proposed algorithms perform better than those in prior work.
I believe this work provides a valid contribution to the literature on stochastic approximation and nonmonotone inclusion problems: although the theoretical results do not improve upon prior work, this paper introduces an interesting new operator with symmetrical variance-reduction analyses and demonstrates its practical potential. Other Comments Or Suggestions: Please refer to the Questions for Authors section. Questions For Authors: Do the complexities of the proposed methods in this paper indeed match the best-known results? (As I wrote in the Claims and Evidence section, please provide a table or a comparison with prior works in terms of complexity and assumptions.) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: First of all, we thank the reviewer for the constructive comments and feedback. Below is our response to each point.

Q1: "The authors ... results. I request that the authors verify this by providing a table or a comparison with prior works in terms of complexity and assumptions. This is necessary and would enhance the novelty and clarity."

>R1: Thank you! Yes, we will provide a table to summarize and compare our work with existing results. We will also highlight our novelty using this summary table as suggested.

Q2: "This paper systematically studies variance-reduced estimators, namely SVRG and SAGA estimators of the forward-reflected operator, and proposes new algorithms whose theoretical complexity matches the best-known results. The paper is well organized, and the experiments demonstrate the theoretical results. Furthermore, it shows that the proposed algorithms perform better than those in prior work. I believe this work provides a valid contribution to the literature on stochastic approximation and nonmonotone inclusion problems: although the theoretical results do not improve upon prior work, this paper introduces an interesting new operator with symmetrical variance-reduction analyses and demonstrates its practical potential."

>R2: We appreciate your accurate summary of our paper and contribution. The complexity we achieve in this paper is similar to that of variance-reduced methods for nonconvex optimization using SVRG and SAGA, and this complexity is unimprovable without enhancement tricks such as restarts or nested loops.

Q3: "Do the complexities of the proposed methods in this paper indeed match the best-known results?"

>R3: To the best of our knowledge, our oracle complexity results are the best known so far for non-monotone problems (i.e., those covered by Assumptions 1.3-1.4) using SVRG and SAGA. We will add a comparison table as suggested.

---

Rebuttal Comment 1.1: Comment: Thank you to the authors for their response.
I will maintain my score
Summary: In this paper, the authors propose two novel algorithms for solving a class of non-monotone equations, building upon the Forward-Reflected-Backward Splitting framework and incorporating variance-reduction techniques such as SVRG and SAGA. The proposed methods are accompanied by rigorous convergence guarantees and achieve state-of-the-art complexity bounds. To validate the theoretical results, the authors conduct a series of numerical experiments demonstrating the practical effectiveness of the algorithms. Claims And Evidence: The claims in the submission are well-supported by rigorous theoretical analysis and empirical validation, with no evident inconsistencies. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate and well-aligned with the problem and application context. Theoretical Claims: Yes, I reviewed the proofs and identified issues related to the assumptions and justification of some theoretical claims. Experimental Designs Or Analyses: I find the experiments to be well-designed and informative, with no apparent issues in their methodology or analysis. Supplementary Material: I have briefly reviewed the appendix. Relation To Broader Scientific Literature: The proposed approach and novel algorithms can help the optimization community create new algorithms, and can be useful for practical tasks. Essential References Not Discussed: No, all essential related works appear to be appropriately cited and discussed. Other Strengths And Weaknesses: The manuscript is well-composed, and the results are presented in a clear and coherent manner. Other Comments Or Suggestions: On lines 140–142 of the second column, please ensure that the citation of “Diakonikolas et al.” is formatted correctly in accordance with the appropriate referencing style. Questions For Authors: I do not have any questions to raise at this point. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: First of all, we thank the reviewer for the comments and feedback. Below is our response to each point.

C1: "In this paper, the authors propose two novel algorithms for solving a class of non-monotone equations, building upon the Forward-Reflected-Backward Splitting framework and incorporating variance-reduction techniques such as SVRG and SAGA. The proposed methods are accompanied by rigorous convergence guarantees and achieve state-of-the-art complexity bounds. To validate the theoretical results, the authors conduct a series of numerical experiments demonstrating the practical effectiveness of the algorithms."

>R1: Thank you for a great summary of our work and contribution. Unlike existing variance-reduction methods in the literature, which primarily address co-coercive or monotone problems, we consider a class of problems satisfying a weak-Minty condition, which is possibly non-monotone. Moreover, our methods are new and significantly different from existing ones due to the use of $S^k$. The complexity $\mathcal{O}(n + n^{2/3}/\epsilon)$ is the best known (but not optimal) for SVRG and SAGA in nonconvex optimization (without enhancement tricks such as restarts or nested loops).

C2: "On lines 140–142 of the second column, please ensure that the citation of “Diakonikolas et al.” is formatted correctly in accordance with the appropriate referencing style."

>R2: Thank you. We will correct the issue you identified.
Whoever Started the Interference Should End It: Guiding Data-Free Model Merging via Task Vectors
Accept (poster)
Summary: The paper offers very satisfying results and a strong method in a writeup that could be communicated much better. Claims And Evidence: You refer to Task Arithmetic as data-free, which is mostly true, except that it reweights the models with data. Classic merging does not, and TIES also has a scaling factor, but it is less sensitive to it, so the default is more likely to be just fine. Methods And Evaluation Criteria: The paper mainly focuses on multitask learning (not generalization, new capabilities etc.); somewhat awkwardly, it focuses on task arithmetic quite heavily, although this is not even the idea behind task arithmetic (there the idea is compositionality of tasks, not orthogonality and lack of interference, which are closer to the first merging works, to TIES, to https://proceedings.neurips.cc/paper_files/paper/2023/file/d28077e5ff52034cd35b4aa15320caea-Paper-Conference.pdf etc.). Theoretical Claims: There are proofs (and I commented also on some things that were unclear to me). In general, this takes a meaningful part of the paper, but I don't feel this is a main contribution of the paper. Experimental Designs Or Analyses: The baselines on NLP are outdated; RoBERTa is a better hyperparameter-tuned (slightly larger) BERT, the first model to pretrain (parallel to ELMo), and a lot has changed since. Beyond this, the method performs extremely well, it is compared with a lot of baselines, and (while for some reason not highlighted) it almost reaches multitask learning, all without data; this is a massive result. Supplementary Material: Mainly the one with the graph, not the equations. Relation To Broader Scientific Literature: The paper states well its relations to previous merging methods. It explains well what the main changes and difficulties in preexisting methods are and covers many baselines.
I do wonder how this method works on LoRA, and whether KnoTS would be irrelevant here or whether prealigning would still be useful even though you learn all the weights, but this is just something I am curious about and not a strong shortcoming of the paper. You mention not using external data, and that other methods do not, but other methods do (especially task arithmetic). Usually, they use a validation set to tweak some sizes. (Noteworthy and very recent is arxiv.org/pdf/2310.02575 in that regard.) It is unclear how exactly you ran those methods and what you did yourself (did you use some validation set? where?). Also recent, but not as recent as the above, is KnoTS; since you are assuming all kinds of linear structure, I wonder whether it would further improve your method like it does for TIES, i.e., whether it is complementary or conflicting. Essential References Not Discussed: The linearity claims remind me of works like https://arxiv.org/abs/2303.09435 https://arxiv.org/abs/2303.08112 from interpretability, and of works like https://arxiv.org/abs/2407.15845 on reconstructing data from weights, or of membership attacks. The claim in 140 should be referred to https://arxiv.org/abs/2302.04863; this is the paper that deals with this (while Task Arithmetic does many nice things, it is not the main source of knowledge regarding that claim). Maybe it would be useful to cite the works proposing model merging when you first use the term? I think you did cite (most?) of them somewhere else (https://arxiv.org/abs/2203.05482 https://arxiv.org/abs/2111.09832 https://arxiv.org/abs/2204.03044). The MoErging literature is not essential, but its coverage is quite lacking (maybe Arrow and PHATGOOSE are the clearest cases to cite? there is also a survey if it helps (https://arxiv.org/abs/2408.07057)). Other Strengths And Weaknesses: The paper is generally clear, but there are many small missing pieces that are left for the reader to reinvent; this makes following the paper very hard.
I tried to highlight those cases (what is an input, what distribution are we discussing, etc.) and explain where the confusion is, but in its current state the paper looks really good when skimming yet makes life really hard for following along. It is unclear from the figure/intro what you mean by input vector: for an LLM there is the overall input for all of the training (but it is not a vector), the input to a given layer, or the input to the whole network (it looks like the second). I do not understand the claim in l140 "This directional consistency suggests that task vectors converge to local optima". Local and not global? (that is obvious) Optima and not just anywhere? Well, this is what convergence means. Are you trying to say the local optimum is the same and unique? This is only partially true (see Guetta et al. 2023, which I also shared in the related work section). Moreover, the fact that models converge to similar solutions doesn't mean that the differences between them are noise (interference they are for sure not, as the literature thinks of interference as places where the different task vectors ruin things for each other, regardless of whether it is good for the original task). By the definition of the delta, it seems like you are thinking of the interference as *any* change from the task vector; this is obviously not true: the whole point of task arithmetic was that you can "add" things together, and merging methods also show cases where you can get disentangled vectors that do not interfere (although they do change) to some extent. Other Comments Or Suggestions: constitude-> constitute "Crucially, since the learning rate and total optimization steps always remain constrained during standard fine-tuning procedures, the corresponding inputs for individual samples remain consistent across successive update iterations." I can't parse this sentence. The inputs are consistent? In what sense? l61 indicate"s"? "As shown in Figure A.3" A.3 is an App.~ not a figure.
Also, the appendix has only subsections and no sections? why? l137 (right col) what is p? probability over the distribution of inputs of the training data in i? all tasks? something else? l160 (right) "lemma1" missing space; same sentence "a individual" wrong det. l179 "highly consistency" l194 eq.1 missing space l198 why does the preposition have a . instead of a space? (there are many typos, you should pass over it, grammarly might find some of them as well) Also in Theorem.1, if the thing is that you want a space and don't want it to jump lines, use ~ instead of a regular space (it is a space that counts as a regular character and not a word boundary) l243 "the" unnecessary? l256 (right) \citet not \cite or \citep when using the citation as part of the sentence. l262 right I am not sure I would call it analysis, maybe just evaluation or performance? We don't learn anything, just compare. What are the red delta numbers in the tables? Difference from what? Questions For Authors: In your method you relearn all of the weights, so in some sense you have no reason to fall in any convex hull of the original models (like AdaMerge for example). Is that an important feature of the method? Do you think that is the main gain? More understanding of what this optimization procedure actually finds might be illuminating (both on what should still be improved, and what is being done here successfully). Is the choice of RoBERTa related to what you could train yourselves? Because if so, maybe this can help: https://huggingface.co/Lots-of-LoRAs is quite a large set of LoRA models all trained similarly. If you use RoBERTa because the method doesn't work with LoRAs, this is something you should state much more clearly. What methods do you see as addressing the least similar problem to yours? Your method starts from a merged model; what if you started from a better merged model (of a dissimilar method)? Would that result in another boost in performance? Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: We appreciate the reviewers' valuable feedback and have addressed each point as follows:

### Concern 1: The Use of Validation Data

1. **Whether task arithmetic is data-free**: We categorize TA as a data-free method because it can select empirical parameters for merging without requiring data. Although it can be improved by searching for the rescaling coefficient, it is reasonable to categorize TA as data-free.
2. **How we ran other methods**: We ran the other methods as described in their papers, and report their published results when the experimental settings are the same.
3. **Whether we use validation data: Our proposed method is entirely data-free** and does not require any validation data. This is because our approach is rescale-free and relies solely on our data-free loss.

### Concern 2: Focus on Multi-Task Learning

Our work focuses on model merging rather than model editing. In the current model merging field, multi-task learning is the primary goal [1,2,3,4,6,7], and TA and task vectors are fundamental concepts for analyzing interference and conflicts [1,2,6].

### Concern 3: Related References

1. **Linearity claims**: Our work focuses on the linear layers rather than linearizing the whole model. Therefore, these linearization works are not highly relevant to us.
2. **Reconstructing data**: Our work uses reconstruction as an intermediate step in our derivation and does not use reconstructed data in our method. Therefore, the data-reconstruction-attack works mentioned are not highly relevant to us.
3. **Other works**: We appreciate the references you provided and will incorporate their discussion in our revision. The conclusions and methods in these works do not significantly overlap with ours, so we consider them "Related References Not Discussed" rather than "Essential References Not Discussed".

### W1: The Claim of Local Optima and the Definition of the Interference Vector

1.
**The claim of local optima:** What we want to express is that the task vectors for the same task lie at similar positions, which indicates that the models are fine-tuned to convergence, so we consider them optimized to local optima. These task vectors may not be the optimal solution in the entire parameter space, so "local optima" is the more rigorous statement. Also, we have not stated that this local optimum is unique, and uniqueness has no significant connection with our method.
2. **The definition of the interference vector: We do not claim that any change to the task vector constitutes interference.** We define interference as the difference between the outputs of the merged model and the expert model, which is composed of $\delta x$; if $\delta$ is orthogonal to $x$, then $\delta$ does not cause interference. Similar concepts are adopted in methods like Eq. 7 of AlphaEdit [5] and Eq. 1 of RegMean [4]. Furthermore, the success of TA and other methods does not mean that "you can get disentangled vectors that do not interfere", since current model merging methods are far from merging without reducing any performance.

### Q1: Convex Hull

Not falling in the convex hull is not unique to our method, as other methods do not constrain the sum of rescaling coefficients to equal 1. The reason our method is effective compared to previous data-free methods is that it implicitly leverages input-data information via task vectors.

### Q2: The Choice of RoBERTa and LoRA Results

We selected RoBERTa because it is a commonly used benchmark, as in works such as [3]. We further provide results on other widely adopted LoRA benchmarks [1][2][3]. Due to the character limit, please see the results in **Tab 1.1 and Tab 1.2 in our rebuttal to reviewer VcLV**.

### Q3: The Least Similar Problem to Our Work

1. **Least similar problem:** Most existing methods focus on reducing interference and conflicts [1,2,3,4,6,7], so it is hard to say what the least similar problem to our work is.
2.
**Combining with other methods:** Since our objective is a simple optimization problem, we expect different initializations to have only limited influence. The results in Tab. 3.1 demonstrate that using different methods as initialization yields similar results.

**Tab 3.1** Results of using different methods as initialization

|Method|Ours|Ties+Ours|Adamerging+Ours|
|-|-|-|-|
|Acc.|85.2|84.9|85.0|

---

We thank the reviewer again for their valuable feedback and hope our detailed responses address the concerns. We look forward to further discussion.

[1] TIES-Merging: Resolving Interference When Merging Models
[2] Parameter Competition Balancing for Model Merging
[3] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging
[4] Dataless Knowledge Fusion by Merging Weights of Language Models
[5] AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models
[6] Task Singular Vectors: Reducing Task Interference in Model Merging
[7] DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling
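The orthogonality remark in W1 above (the interference on an input $x$ is the output change $\delta x$, which vanishes when $\delta$ is orthogonal to $x$) can be checked with a tiny sketch. This is our own illustration, not the authors' code; the layer dimensions and the projector construction are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 8, 4
x = rng.standard_normal(d_in)

# Projector onto the orthogonal complement of x.
P = np.eye(d_in) - np.outer(x, x) / (x @ x)

# A weight perturbation delta whose rows are all orthogonal to x ...
delta = rng.standard_normal((d_out, d_in)) @ P

# ... changes nothing about the layer's output on x: delta @ x = 0.
assert np.allclose(delta @ x, 0.0)
```

So a nonzero change to the weights need not interfere with a given task, exactly as the rebuttal's definition requires.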
Summary: This paper introduces WUDI-merging, a new data-free model merging method. The authors provide the theory-backed idea that task vectors for a linear layer represent a linear subspace corresponding to the layer's inputs. They use this insight to construct a merging method that aims to minimize the interference of the merged model's weights on their corresponding tasks. They also introduce a per-task weighting corresponding to the Frobenius norm of the task vector. When compared to other methods on a variety of benchmarks, WUDI-merging obtains a significantly higher normalized average multi-task score than other data-free merging baselines and even test-time adaptation methods. Claims And Evidence: The claims are supported by evidence. The core observation, namely that task vectors for a linear layer represent a linear subspace corresponding to its inputs, and the derivation of the optimization problem are backed by theory. WUDI-merging is compared to a variety of data-free merging methods and TTA methods on a variety of benchmarks, where it achieves SoTA performance. Sufficient ablation studies are conducted on the method. One potential caveat, however, is that experiments are missing that explicitly test some of the theoretical results in the paper. This would include seeing how close the inputs to a linear layer are to the subspace corresponding to its task vector (i.e., an experimental exploration of the reconstruction error in equation (13)). The theoretical analysis, while sound, involves the use of constants bounding the norm of gradients, so the bound on the error might be large if those constants are large. Since the main purpose of these theoretical results was to motivate the construction of WUDI-merging, this does not significantly detract from the main contribution of this paper.
Methods And Evaluation Criteria: Evaluation of average normalized accuracy on multi-way merges for vision models, discriminative text models, and generative text models makes sense for evaluating a merging method. Theoretical Claims: I skimmed the proofs and did not find any issues. However, I did not examine them in detail. Experimental Designs Or Analyses: I didn't review anything in detail, but the experimental designs and analysis make sense to me. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper categorizes existing merging methods into data-free, test-time adaptation, and MoE-like merging. The differences and advantages/disadvantages of these methods are explained, and the categorization of WUDI-merging as a data-free method makes sense. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Strengths: - Theoretical results are used to motivate the method. - The method is non-trivial and novel. - The method is data-free and relatively computationally cheap. - The writing is clear. Weaknesses: - Computation times and memory requirements are not provided for baseline merging methods. Other Comments Or Suggestions: Typos: - line 175, the "Where" should not be capitalized - Table 2, "date-free" should be "data-free" - line 994: "that he task vector" should be "that the task vector" Questions For Authors: In the "Selection of Linear Subspace" paragraph of section 4.4: 1. How exactly are a combination of task vectors and random vectors used to optimize the loss? Do you replace some neurons in the task vector with random vectors and run WUDI-merging as normal? 2. How are the random vectors selected? It would be good to include this information in at least the appendix. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewers' valuable feedback and have addressed each point as follows:

### W1: Reconstruction Error

To calculate the reconstruction error in Equation (13), we first obtain the input for each layer from a set of samples and then compute the reconstruction coefficients via least squares with the corresponding task vector. The reconstructed vector, $x_{\text{recon}}$, is derived from these coefficients and the task vector. We then calculate the Relative Reconstruction Error (RRE) for a sample from task $i$ as follows:

$\text{Relative Reconstruction Error (RRE)} = \frac{\|x - x_{\text{recon}}\|}{\|x\|}, \quad \text{where} \quad x_{\text{recon}} = \tau_i(\tau_i^\top\tau_i)^{-1}\tau_i^\top x.$

Table 2.1 shows the results for different layers and tasks, which demonstrate that the relative reconstruction errors across different layers are extremely small, further validating our theoretical analysis.

**Table 2.1**: Reconstruction error results on eight tasks at different layers

| Task | SUN397 | Cars | RESISC45 | EuroSAT | SVHN | GTSRB | MNIST | DTD |
|---------|----------|----------|----------|---------|---------|---------|---------|---------|
| Layer 1 | 1.3e-5 | 1.3e-5 | 1.3e-5 | 3.3e-3 | 7.1e-3 | 3.6e-3 | 5.8e-3 | 1.3e-5 |
| Layer 3 | 1.1e-5 | 1.1e-5 | 1.2e-5 | 1.4e-5 | 1.6e-5 | 1.3e-5 | 1.3e-5 | 1.1e-5 |
| Layer 6 | 1.0e-5 | 1.1e-5 | 1.1e-5 | 1.3e-5 | 1.2e-5 | 1.2e-5 | 1.2e-5 | 1.1e-5 |
| Layer 12| 9.8e-6 | 1.1e-5 | 1.2e-5 | 1.1e-5 | 1.2e-5 | 1.1e-5 | 1.2e-5 | 1.0e-5 |

---

### W2: Resource Consumption Comparison

We report the computational time and GPU memory usage of different methods on the ViT-B-32 tasks. In comparison to the Adamerging method, our approach not only improves performance but also significantly reduces computational cost. The details are summarized in the following table:

**Table 2.2**: Detailed computational time and GPU memory requirements on ViT-B-32 tasks.
| Method | Accuracy (%) | Time | GPU Memory (GB) |
|-------------------------------|--------------|----------|-----------------|
| Ties Merging | 72.4 | 4 s | 0 |
| Adamerging | 81.1 | 127 min | 17.1 |
| WUDI-Merging-CFS (CPU) | 84.4 | 5 s | 0 |
| WUDI-Merging-CFS (GPU) | 84.4 | 2 s | 1.8 |
| **WUDI-Merging** | **85.2** | 1 min 54 s | 4.0 |

### Q1 & Q2: How to Use Random Vectors or Subsets of the Task Vector for Optimization

**For the subset of task vectors**, we randomly sample a subvector from the original task vector as follows: $\tau^{\text{sub}}_i = \tau_i[\text{rand\_index}, :]$. The corresponding loss is computed as:

$\mathcal{L}_{\text{sub}} = \sum_{i=1}^{n} \frac{1}{\|\tau_i\|_F^2}\,\delta_i(\tau^{\text{sub}}_i)^\top = \sum_{i=1}^{n} \frac{1}{\|\tau_i\|_F^2}\,(\tau_m - \tau_i)(\tau^{\text{sub}}_i)^\top$

**For random vectors**, we sample from a Gaussian distribution whose mean and standard deviation are computed from the original task vectors:

$\tau^{\text{random}}_i \sim \mathcal{N}(\mu_i, \sigma_i^2), \quad \text{where} \quad \mu_i = \text{mean}(\tau_i) \quad \text{and} \quad \sigma_i = \text{std}(\tau_i)$

The loss in this case is given by:

$\mathcal{L}_{\text{random}} = \sum_{i=1}^{n} \frac{1}{\|\tau_i\|_F^2}\,\delta_i(\tau^{\text{random}}_i)^\top = \sum_{i=1}^{n} \frac{1}{\|\tau_i\|_F^2}\,(\tau_m - \tau_i)(\tau^{\text{random}}_i)^\top$

**All reported results are averaged over 5 sampling runs**. We thank the reviewers for their suggestions and will include these additional details in the revision.

---

We thank the reviewer again for their constructive feedback and hope that our detailed responses address the concerns. We look forward to further discussion.
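The RRE computation described in W1 amounts to a least-squares projection onto the span of the task vector. Below is a hedged sketch of that calculation; the matrix shapes, variable names (`tau`, `x_in`), and the use of `np.linalg.lstsq` are our own illustrative assumptions rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_reconstruction_error(tau, x):
    """Least-squares projection of x onto the column space of tau,
    returning ||x - x_recon|| / ||x||."""
    coeffs, *_ = np.linalg.lstsq(tau, x, rcond=None)
    x_recon = tau @ coeffs
    return np.linalg.norm(x - x_recon) / np.linalg.norm(x)

tau = rng.standard_normal((64, 16))    # toy stand-in for a task vector
x_in = tau @ rng.standard_normal(16)   # lies exactly in the subspace
x_out = rng.standard_normal(64)        # a generic vector

assert relative_reconstruction_error(tau, x_in) < 1e-10  # near zero
assert relative_reconstruction_error(tau, x_out) > 1e-3  # far from zero
```

A near-zero RRE on real layer inputs, as in Table 2.1, is what supports the claim that those inputs lie (approximately) in the task-vector subspace.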
Summary: This paper proposes WUDI-Merging, a data-free model merging method where the merged model weights are optimized via gradient descent using the Adam optimizer. The optimization objective leverages the insight that task vectors form an approximate linear subspace of the corresponding input space. Additionally, the authors provide a closed-form alternative (WUDI-Merging-CFS) for scenarios with limited computational resources. Empirical evaluations demonstrate that WUDI-Merging achieves state-of-the-art performance across vision and NLP tasks and is rescaling-free. ## Update after rebuttal I would like to thank the authors for the rebuttal. While I think the performance of WUDI is impressive and the new results on Flan-T5 and Qwen LoRAs are reassuring, I believe the authors should make a serious effort to improve the presentation of the paper. Having also read the authors' responses to the other reviewers, I found some of the responses not very informative or persuasive. For example, as for the linearity claims, I think using the Lipschitz constant to explain them is too superficial. Also, Reviewer 7sbW's questions about how the authors ran other methods that require data are valid, but not answered. WUDI would be a great addition to the community if these problems are addressed, and I strongly encourage the authors to handle the requests properly. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I have reviewed the theoretical claims in the main paper. Lemma 1 concerns input consistency, Proposition 1 approximates task vectors as a linear combination of inputs, and Theorem 1 provides an upper bound on interference. The proofs appear to be correct. Experimental Designs Or Analyses: Both the vision and language model merging experiments follow the standard designs of previous model merging work. The analyses in Sections 4.2 and 4.4 are sound and valid. Supplementary Material: I've skimmed the appendix, including proofs, experiment details, and numerical evidence.
Relation To Broader Scientific Literature: This work may inspire researchers to further explore SGD-based optimization strategies for data-free model merging, potentially shifting the focus from traditional closed-form solutions or heuristic-based approaches. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths - The optimization objective is theoretically motivated and can be efficiently approached using Adam within current frameworks (e.g., PyTorch). The corresponding computation cost is discussed. - An alternative solution, WUDI-Merging-CFS, is also provided for scenarios with limited GPU resources, along with a sensitivity analysis on the regularization coefficient. - WUDI-Merging is rescaling-free, as demonstrated in Figure 4 (b). - A comprehensive benchmark evaluation on both vision and language models supports the superiority claims of the proposed method. - The experiments are well-designed and the analyses provide further insight into WUDI-Merging. ### Weaknesses - The experiment for generative language models is limited to merging three Llama2 models, which is relatively insufficient. Expanding the evaluation to include merging *more models* and *more model architectures* (e.g., encoder-decoder, decoder-only) would provide a more comprehensive understanding of the general applicability and potential limitations of WUDI. - The experimental study of "input consistency" and the model merging results are based on fully fine-tuned models. The paper does not explore scenarios where task vectors are derived from PEFT methods (e.g., LoRA). Further analysis on this would provide a clearer understanding of the method's applicability. - The potential limitations of the proposed method are not well discussed. Other Comments Or Suggestions: - The analysis stems from the assumption/literature observation that "the task vector in the linear layer encapsulates most of the capabilities of the expert models.
As shown in Figure A.3, an expert model utilizing only the task vector of the linear layer achieves performance comparable to that of the full expert model. Therefore, we primarily focus on the linear layers of the model." I assume this may be a model-dependent observation, so I am particularly interested in the cases where using only the task vector of the linear layers is not enough. Will WUDI be less effective in those cases? - While WUDI-Merging is described as hyperparameter-efficient, it would be helpful to provide a more detailed discussion of how different learning rates or optimization settings affect the quality of the merged model. - Please proofread the manuscript once again, as there are several typos/mistakes throughout, e.g. line 133 "linear linear", line 993 "magnitude still very small", etc. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewers' valuable feedback and have addressed each point as follows:

----

### W1: Results on More Models and LoRA

To further demonstrate the generalizability of our method on different models and on LoRA, we added experiments on Flan-T5-base and Qwen-14B. For merging LoRA, we first restore $BA$ back into the original matrix ($\tau_i = B_i A_i$), then apply WUDI-Merging directly to $\tau_i$ to obtain $\tau_m$, and finally merge it into $\theta_{base}$. The experimental results obtained from merging Flan-T5-base (LoRA fine-tuned) models and Qwen-14B (LoRA fine-tuned) models are shown in the tables below:

**Tab 1.1:** Experimental results of merging Flan-T5-base (LoRA fine-tuned) models on all eight tasks.

| Method | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST2 | STSB | Avg. |
|---|---|---|---|---|---|---|---|---|---|
| **Individual** | 69.1 | 82.7 | 85.5 | 90.9 | 84.0 | 84.4 | 92.9 | 87.4 | 84.6 |
| **Ties-Merging** | 68.3 | 56.3 | 79.4 | 89.8 | 83.7 | 79.4 | 91.6 | 71.2 | 77.5 |
| **AdaMerging++** | 69.1 | 60.3 | 78.4 | 90.0 | 83.6 | 79.1 | 91.6 | 74.1 | 78.3 |
| **WUDI-Merging (Ours)** | 68.6 | 79.0 | 77.7 | 87.2 | 83.1 | 75.8 | 93.2 | 85.0 | **81.2 (+2.9)** |

**Tab 1.2:** Experimental results of merging Qwen-14B (LoRA fine-tuned) models on all four tasks.

| Method | MMLU | TruthfulQA | BBQ | CNN-DailyMail | Avg. |
|---|---|---|---|---|---|
| **Individual** | 68.35 | 53.34 | 93.53 | 19.46 | 58.67 |
| **Task Arithmetic** | 67.56 | 52.33 | 78.38 | 20.54 | 54.70 |
| **Ties-Merging (w/ DARE)** | 69.38 | 52.03 | 81.06 | 15.91 | 54.62 |
| **WUDI-Merging (Ours)** | 69.17 | 55.71 | 80.56 | 17.33 | **55.69 (+0.99)** |

While there is a slight degradation relative to the individual models, our method achieves SOTA performance when merging LoRA fine-tuned models.
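The LoRA-restoration step described in W1 can be sketched as follows. This is a minimal illustration with hypothetical names; `wudi_merge` here is a simple average used as a placeholder, *not* the actual WUDI-Merging optimization step.

```python
import numpy as np

def restore_lora(B, A):
    """Restore a LoRA update to a full-rank task vector: tau_i = B_i A_i."""
    return B @ A

def merge_lora_experts(base_weight, loras, wudi_merge=None):
    """Merge LoRA experts following the recipe in W1:
    1) restore each tau_i = B_i A_i,
    2) merge the tau_i (placeholder: simple average, NOT the real WUDI step),
    3) add the merged task vector to the base weight theta_base."""
    taus = [restore_lora(B, A) for B, A in loras]
    if wudi_merge is None:
        wudi_merge = lambda ts: sum(ts) / len(ts)  # placeholder merger
    tau_m = wudi_merge(taus)
    return base_weight + tau_m
```

Any data-free merging routine that maps a list of task vectors to a single merged task vector can be dropped in as `wudi_merge`.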
----

### W2: Potential Limitations

We think that merging heterogeneous models may be a potential limitation, as such models might not adhere to the assumptions underlying WUDI-Merging and current merging methods. Addressing these challenges is a key direction for our future research.

----

### Q1: On Using Only the Task Vector of the Linear Layers

In the context of homogeneous model merging, we think that **"using only the task vector derived from the linear layers is usually enough"**. Removing the nonlinear task vector from the fine-tuned model can be viewed as applying a small offset to a limited number of parameters. For a $C$-Lipschitz continuous model, the change in the output is bounded by:

$\|f_{\theta+\tau_i}(x) - f_{\theta+\tau_i^{linear}}(x)\| \le C \cdot \|(\theta+\tau_i) - (\theta+\tau_i^{linear})\| = C \cdot \|\tau_i^{non-linear}\|$

Considering that the parameters of the nonlinear layers account for only a small fraction of the model (Qwen-14B $\approx$ 0.007%, Llama3.1-8B $\approx$ 0.003%) and have a small offset, the effect of using only the task vector of the linear layers is small. Therefore, we think that using only the task vector derived from the linear layers is usually enough. In all our other experiments, we found that utilizing only the task vector of the linear layers achieves performance comparable to that of the full expert model, which also confirms this point. However, if the parameter offsets are large, our method may be limited, which is also a major limitation in the current field of model merging [1]. Current model merging methods require the change from the base model to the fine-tuned expert models to stay within a relatively small range. We will explore this limitation in our future work.

----

### Q2: Results on Different Optimization Strategies

We evaluate the effects of applying different optimizers and learning rates in our method, as shown in Tab 1.3 and Tab 1.4.
Tab 1.3 shows that Adam (85.2%) and SGD (85.1%) achieve similar performance, which suggests that our method is not sensitive to the choice of optimizer. Tab 1.4 demonstrates that a smaller learning rate better ensures the stability of optimization and improves the quality of the results.

**Tab 1.3:** Experimental results of applying different optimizers in our method; the average accuracy of ViT-B-32 is reported.

| | Adam | SGD |
|---|---|---|
| Acc. | 85.2 | 85.1 |

**Tab 1.4:** Experimental results of applying different learning rates in our method; the average accuracy of ViT-B-32 is reported.

| 1e-5 | 1e-4 | 1e-3 | 1e-2 | Ada-Merging |
|---|---|---|---|---|
| 85.2 | 84.3 | 83.9 | 83.5 | 80.9 |

----

### Q3: Typos

We appreciate the reviewer's attention to detail and will correct the identified typographical errors in the final revision.

---

We thank the reviewer again for their constructive feedback and hope that our detailed responses address the concerns. We look forward to further discussion.

[1] Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
Nonlinear transformers can perform inference-time feature learning
Accept (poster)
Summary: This paper studies the in-context learning capacities of transformers when the prompt sequences are given by a (possibly low-dimensional) Gaussian single-index model. When the length of the prompt sequences exceeds a certain (information-theoretic) limit, a transformer trained with a modified gradient descent method is shown to achieve vanishing in-context prediction error. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: I generally feel that the theoretical results in this paper, to some extent, illustrate the in-context learning capabilities of transformers. The mechanism by which the learned transformer performs feature learning in the single-index model is well explained. However, I have concerns about the “gradient-based training” (Algorithm 1) analyzed in this work, particularly given the setting under consideration: (1) The K and Q matrices (as well as the V matrix and the matrix from the MLP layer) are merged into a single matrix, with certain coordinates fixed at 0 or 1 (i.e., they are not trained). (2) In the analysis of optimization, the weight matrices of the attention layer are trained using only a single step of gradient descent. Given that the current analysis is already quite complex, it is unclear whether these simplifications are essential to the theoretical claims. In particular, simplification (1) ensures that the output takes the exact form of Equation (3.2), which computes a weighted average over the labels $y_i$ before applying the nonlinear MLP. Additionally, the use of a single gradient descent step suggests the analysis of optimization operates in the NTK regime, which is a clear simplification for theoretical tractability. Overall, I believe the paper would benefit from clearer explanations and a more detailed justification of this gap.
Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: In-context learning capability of transformers (the backbone of LLMs) is an important topic of both theoretical and practical relevance. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: It is a bit confusing that the dependency on $\log^{\mathrm{deg}(\sigma)}(d)$ is hidden in the complexity bounds. Questions For Authors: As I understand it, the techniques developed in this paper (and prior works) are specific to the Gaussian single-index model. To what extent can they be extended beyond this setting, such as by relaxing the Gaussian assumption? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical comments and questions below. **On Algorithm 1** We make the following remarks on Algorithm 1. - The strategy of merging attention matrices and zeroing out some submatrices is extensively used in theoretical analyses of both linear and nonlinear transformers [Ahn et al. 2023; Mahankali et al. 2023; Kim and Suzuki 2024a; Oko et al. 2024b] – even with such simplifications, these theories can reproduce fruitful insights into attention modules. We also empirically demonstrated in [[link]](https://osf.io/p87yc?view_only=be342dc212bb49cb8f72309fdc37d376) (see also our response to reviewer 7xsp for details) that pretraining a GPT-2 model can achieve a test-time sample complexity that goes beyond the CSQ lower bound. We believe this suggests that mechanisms similar to our theoretical construction may be present in more realistic settings. - Our layer-wise training procedure is inspired by recent works in the feature learning theory literature, which simplifies the statistical and optimization analyses. In particular, a single gradient descent step (Stage I) has been used to model the early phase of gradient-based training on neural networks [[Ba et al. 2022]](https://openreview.net/forum?id=akddwRG6EGi) [[Damian et al. 2022]](https://proceedings.mlr.press/v178/damian22a/damian22a.pdf) and transformers [Oko et al. 2024b], during which the first-layer parameters align with the feature vector of the single-index target and escape saddle points in the nonconvex landscape. The layer-wise training strategy – dividing the algorithm into two stages, one gradient descent step on the inner layer and ridge regression for the outer layer – is also common in these prior works, where the lower layer captures the relevant feature, while the top layer (+ reinitialized bias units) approximates the link function. **On the NTK Regime** We would like to clarify that our analysis is *not* in the NTK/lazy regime.
Quite the contrary, the one-step feature learning paradigm has been proposed to establish statistical separations and superiority over the NTK, due to the adaptivity to low-dimensional structure – see [Ba et al. 2022][Damian et al. 2022] for discussion. **Beyond Gaussian Single-index Models** Thank you for the suggestion. Extending our analysis beyond Gaussian single-index models is definitely an interesting direction. We make the following remarks. - In terms of data distribution, recent works have established universality results for single-index learning beyond Gaussian data, see [[Zweig et al. 2023]](https://openreview.net/forum?id=JkmvrheMe7), by utilizing the orthonormal basis functions of the data distribution or CLT-type arguments. - In terms of target functions, a natural extension would be multi-index functions such as XOR/parity, or more interestingly, hierarchical functions that require a deeper model to express. We intend to investigate such extensions as future work. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
Summary: This paper studies the task of learning single-index models $y = \sigma(x \cdot \beta)$ using a two-layer single-head softmax transformer. The authors prove that a pretrained transformer can solve this task in context (with different $\beta$ in different prompts). When $\beta$ is sampled from the unit sphere in an $r$-dimensional subspace of $\mathbb{R}^d$, the authors provide the following bounds on the sample complexity ($\mathrm{IE}$ and $\mathrm{GE}$ stand for the information and generative exponents of the link function $\sigma$, respectively): * For pretraining, the number of prompts is $\tilde{O}( r^2 d^{\mathrm{IE}+2} \vee r^{1.5 \mathrm{GE}} )$ and the length/number of samples in each prompt is $\tilde{O}( r^{1.5 \mathrm{GE}} )$. * For inference, the length/number of samples in the prompt is $\tilde{O}( r^{1.5 \mathrm{GE}} )$, which matches the SQ lower bound up to a multiplicative constant in the exponent. The pretraining algorithm is rather standard (standard in this area, if not in practice): they first train the attention layer using one large step of GD and then train the output layer using ridge regression. They show that after the first stage, the attention layer can recover the relevant $r$-dimensional subspace, and after the second stage, the model learns to reconstruct the link function using samples. ## update after rebuttal I thank the authors for the response. I agree that while the correlation can be easily extracted from the attention in a linear way, leveraging the nonlinearity to improve the sample complexity is non-trivial. Hence, I'll keep my score (4). Claims And Evidence: This is a theoretical paper and the authors provide complete proofs for their claims. They also include some synthetic experiments to support their claims. Methods And Evaluation Criteria: Yes. (In-context) sample complexity is a standard metric to measure an algorithm's ability to learn single-index models (in context).
Theoretical Claims: I checked the proof strategy but did not check the detailed calculations. The proof looks believable to me, as its components are either standard (ridge regression on the output layer) or believable (but non-trivial) extensions of existing arguments (e.g., learning the subspace using one large GD step, lowering the IE to the GE using the monomial transformation that comes from expanding the nonlinearity). Experimental Designs Or Analyses: This is a theory paper and the synthetic experiments they provide are serviceable. Supplementary Material: I skimmed the proofs in the appendix. Relation To Broader Scientific Literature: The main contribution of this paper over [Oko et al., 2024b] is that it improves the in-context sample complexity from $r^{\Theta(\mathrm{deg})}$ to $r^{\Theta(\mathrm{ge})}$. The former can be achieved by using kernel methods (in the relevant subspace), while to get the GE bound, the model needs to recover the actual direction $\beta$. In addition, the authors use softmax attention instead of linear attention (which is widely used in theory as it is usually easier to analyze), and the existence of this nonlinearity leads to a bound beyond the CSQ lower bound $r^{\Theta(\mathrm{ie})}$ (which is what one would get from using algorithms such as online SGD to learn single-index models). Essential References Not Discussed: No. Other Strengths And Weaknesses: Overall, this is a well-written paper with strong results. See the previous sections for the strengths. One potential issue of this paper is that it relies heavily on certain properties of single-index models. In particular, in this setting, the correct way to measure the correlation between two samples (in a given prompt) is exactly the standard Euclidean inner product, so the attention layer only needs to identify the relevant subspace. This is more of a coincidence than something universal. Other Comments Or Suggestions: Typos: * line 188 right: ...
inference-time sample complexity of $\tilde{O}( r^{\Theta(\mathrm{ie}(\sigma_*))} )$ .... Should be $\tilde{O}( r^{\Theta(\mathrm{deg}(\sigma_*))} )$. * line 229 right: $\Gamma \in \mathbb{R}^m$. Should be $\Gamma \in \mathbb{R}^{d \times d}$. Questions For Authors: See the "Other Strengths And Weaknesses" section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We will correct the typos and improve the writing of the manuscript. We address the technical points below. **Properties of single-index target** We agree with the reviewer that our current learnability result relies on the Euclidean inner product in the attention layer. We make the following remarks. - While the correlation between samples can be easily extracted from the attention, our analysis makes use of the Softmax function to transform the training data in some nonlinear fashion --- such nonlinear transformation is crucial to obtain an improved in-context sample complexity. In contrast, prior works using linear attention layers can only make use of (linear) correlation through the attention block. - We speculate that deeper architectures would be able to construct more complicated correlations between data points and further improve the in-context sample complexity (e.g., see recent works on the gradient-based learning of three-layer neural networks [[Nichani et al. 2024]](https://openreview.net/forum?id=fShubymWrc&noteId=efqPkOAyOx) [[Dandi et al. 2025]](https://arxiv.org/abs/2502.13961)), which is an important future direction. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
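To make the role of the softmax nonlinearity concrete, here is a minimal numerical sketch of the attention readout the reviews describe (a weighted average of the in-context labels $y_i$, with weights given by a softmax over inner products with the query). This illustrates the mechanism only; it is not the paper's trained model, and the names are illustrative.

```python
import numpy as np

def softmax_label_average(X, y, x_query, beta=1.0):
    """Softmax-attention readout: weight each in-context label y_i by
    softmax(beta * <x_query, x_i>) over i, i.e. a *nonlinear* transform
    of the sample correlations, then average the labels."""
    scores = beta * (X @ x_query)
    scores -= scores.max()          # numerical stability
    w = np.exp(scores)
    w /= w.sum()                    # weights form a probability vector
    return w @ y

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))    # 64 in-context inputs in R^8
y = rng.standard_normal(64)         # their labels
out = softmax_label_average(X, y, X[0])
```

A linear attention module would instead output a fixed linear functional of the correlations; the exponential reweighting above is the kind of nonlinear label transformation that the rebuttal argues is crucial for the improved in-context sample complexity.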
Summary: This paper studies the optimization and statistical guarantees for the in-context learning of the Gaussian single-index function class using a transformer with softmax attention. The derived inference-time sample complexity is tighter than that of existing works, which indicates that pre-trained transformers can implement full statistical query (SQ) algorithms. During the proof, a mechanism of ICL is also characterized. -------------------------- ## Update after rebuttal. My concerns are addressed overall. I prefer to keep the current score and support acceptance. Claims And Evidence: The analysis is very impressive and solid. Methods And Evaluation Criteria: N/A. This paper is mainly theoretical. Theoretical Claims: I checked the proof sketch, which looks reasonable and rigorous. Experimental Designs Or Analyses: The experimental design makes sense, although I feel the experiments are not sufficient. Please see the weaknesses section. Supplementary Material: I will check the details when I have more time, but the proof sketch looks good. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is well written, and the theoretical analysis is impressive. I would like to mention some weaknesses here. 1. The experiments are not sufficient. (a) Figure 6.1 only compares the kernel method and ICL, which correspond to the kernel bound and the SQ lower bound. It would be better to also show the result for the CSQ bound, which should correspond to one-pass SGD on two-layer neural networks from lines 316-318 Right. (b) It would be better if you could conduct some quantitative experiments on the order of the sample complexity, even with a much simpler model. 2. Algorithm 1 seems quite complicated. Is it designed for the simplicity of theoretical analysis, or is there a practical training strategy that is close to Algorithm 1? Other Comments Or Suggestions: N/A Questions For Authors: 1.
Can you summarize the technical novelty compared with [Oko et al. 2024b], especially about the difference in your settings (if any) and why you can have a stronger result than theirs? 2. Can you discuss some practical insights from your results? For example, what can people learn from your results to improve the training and testing performance? Oko et al. 2024b. Pretrained transformer efficiently learns low-dimensional target functions in-context. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical comments below. **Additional experiments** Thank you for the suggestions on experiments. We have conducted an additional experiment to probe the test-time sample complexity of GPT-2 models for learning single-index target functions, and demonstrate that the sample complexity surpasses the CSQ lower bound. - We pretrained GPT-2 on a degree-3 single-index model $y=\mathrm{He}_3(\langle\beta,x\rangle)$, in the setting where $r=d$, and plotted the relationship between the ambient dimensionality $d$ and the minimal test prompt length $N_\mathrm{test}$ required to surpass given test error thresholds. - In [[link]](https://osf.io/p87yc?view_only=be342dc212bb49cb8f72309fdc37d376), we observe that the estimated inference-time sample complexity is approximately $d^{1.13}\sim d^{1.16}$, which is lower than the kernel lower bound of $d^3$ and the CSQ lower bound of $d^{1.5}$ for $\mathrm{He}_3$. Note that the CSQ lower bound also suggests a statistical hardness for online SGD – see e.g. [[Damian et al. 2023]](https://openreview.net/forum?id=73XPopmbXH&noteId=eMjWYg6qko). **Clarification on Algorithm 1** We make the following remarks on Algorithm 1. - Indeed, Algorithm 1 involves a few ingredients that simplify the theoretical analysis, similar to recent works in the feature learning theory literature. In particular, a single gradient descent step (Stage I) has been used to model the early phase of gradient-based training on neural networks [[Ba et al. 2022]](https://openreview.net/forum?id=akddwRG6EGi) [[Damian et al. 2022]](https://proceedings.mlr.press/v178/damian22a/damian22a.pdf) and transformers [Oko et al. 2024b], during which the first-layer parameters align with the feature vector of the single-index target and escape saddle points in the nonconvex landscape.
The layer-wise training strategy – dividing the algorithm into two stages, one gradient descent step on the inner layer and ridge regression for the outer layer – is also common in these prior works, where the lower layer captures the relevant feature, while the top layer (+ reinitialized bias units) approximates the link function. - While our theoretical result applies to a specific parameterization and gradient-based learning algorithm, we empirically demonstrated in the additional experiment above that standard end-to-end training of a GPT2 model can achieve an in-context sample complexity that goes beyond the CSQ lower bound. We believe this suggests that mechanisms similar to our theoretical construction may be present in more realistic settings. **Distinctions from Oko et al. (2024b)** We will include a dedicated comparison section in the paper; here we briefly summarize the key differences. - Improved inference-time sample complexity – In Oko et al. (2024b), the derived inference-time sample complexity is $O(r^{\Theta(deg(\sigma_*))})$, which corresponds to the sample complexity of kernel methods on an $r$-dimensional subspace. In our paper, we derived an improved sample complexity of $O(r^{\Theta(gen(\sigma_*))})$ by showing that nonlinear transformers are capable of inference-time feature learning (beyond the kernel regime). - Absence of low-dimensional assumption – While Oko et al. (2024b) assumes that the dimensionality $r$ of pretraining task distribution is much smaller than the ambient dimensionality $d$, our analysis does not rely on this assumption. - Theoretical analysis – To derive the improved sample complexity $O(r^{\Theta(gen(\sigma_*))})$, we demonstrate that the nonlinear transformation applied to the output label $\{y_i\}$ reduces the information exponent of the link function $\sigma_*$ to its generative exponent. 
In particular, it is essential to show that the degree-$gen(\sigma_*)$ term in the Hermite expansion of the nonlinearly transformed labels (via the softmax transformation) is non-vanishing. This requires a careful tracking of the corresponding Hermite coefficients, which constitutes one of the most technically challenging parts of the analysis. This aspect is not addressed in Oko et al. (2024b), where it is only shown that the linear attention module computes correlations on the raw label $\{y_i\}$ and a fixed kernel basis $\{\phi(x_i)\}$ (constructed by a fixed MLP block). **Practical insights** Our goal is to rigorously study the statistical efficiency of ICL in a well-defined target function class (single-index models). Therefore, our analysis does not directly yield algorithmic improvements in practical settings. This being said, we believe that the mechanism that we theoretically investigate (in-context feature learning) serves as a foundation of future studies on the capabilities and limitations of ICL compared to other algorithms that act on the test prompt. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: Thank you for the response. My concerns are addressed overall. I prefer to keep the current score and support an acceptance.
Summary: This work studies in-context learning (ICL) of single-index models $y = \sigma(\langle x, \beta \rangle)$ using nonlinear transformers, focusing on inference-time sample complexity. The authors propose a two-stage training approach: (1) a single gradient descent step on the attention matrix to capture feature structure, and (2) ridge regression on the MLP layer to fit the nonlinear transformation. Unlike *Oko et al. (2024b)*, which assumes a low-dimensional feature structure, this work removes this restriction while achieving better inference-time sample complexity (Table 4.1). The main findings show that nonlinear transformers outperform non-adaptive methods (e.g., kernel methods) by extracting task-specific features at inference time, without parameter updates. Moreover, the softmax attention mechanism plays a crucial role in feature learning, helping transformers surpass the Correlational Statistical Query (CSQ) lower bound, making them more sample-efficient than CSQ-based methods. The study provides rigorous theoretical guarantees on both pretraining complexity and inference complexity, demonstrating that pretrained transformers can adaptively learn features with superior statistical efficiency compared to traditional approaches. ## Update after rebuttal Thank you for the thoughtful and detailed rebuttal. The clarification of the theoretical contributions—particularly the improved inference-time sample complexity and the removal of low-dimensional assumptions—meaningfully strengthens the paper. The additional experiments further support the main claims. Based on this, I am increasing my score and recommend acceptance. Claims And Evidence: The paper provides theoretical justification for its claims, particularly in demonstrating improved inference-time sample complexity over kernel methods and surpassing the CSQ lower bound. The main results are well-supported by formal optimization guarantees and statistical complexity bounds.
Additionally, Table 4.1 effectively contrasts the findings with prior work (*Oko et al., 2024b*), highlighting the relaxed dimensional assumptions and improved efficiency. However, some claims could benefit from stronger empirical validation. The synthetic experiments with GPT-2 provide limited practical evidence, as they do not comprehensively test real-world generalization or scalability. Additionally, while the softmax attention mechanism’s role in feature extraction is theoretically well-motivated, a deeper ablation study on its necessity and effectiveness would strengthen the argument. Overall, the core theoretical claims appear well-supported, but further empirical evidence would enhance the credibility of the results. Methods And Evaluation Criteria: Yes, both the studied setting (in-context learning of single-index models with nonlinear transformers) and evaluation criteria (inference-time sample complexity) make sense. The single-index model is a well-established framework in statistical learning and serves as a meaningful testbed for understanding how transformers learn features at inference time. The focus on inference-time sample complexity is particularly relevant, as it directly measures the efficiency of transformers in extracting task-specific information without parameter updates—aligning well with real-world applications of in-context learning. Theoretical Claims: The theoretical claims, particularly regarding inference-time sample complexity and softmax attention’s role in feature extraction, are well-structured and follow established techniques. However, I did not verify every step in detail. Key results, such as the gradient descent analysis on the attention matrix and the Hermite expansion argument, appear reasonable but would require closer scrutiny to confirm correctness. No obvious errors stood out, but a deeper review is needed for full validation. 
Experimental Designs Or Analyses: The synthetic experiment using GPT-2 provides a reasonable comparison between in-context learning with transformers and kernel methods, aligning with the paper’s theoretical claims. The choice of a single-index model with a Hermite polynomial link function is appropriate for testing inference-time feature learning. Supplementary Material: While I did not verify every step in detail, the arguments seem consistent with the claims in the main paper. No obvious gaps or inconsistencies stood out, but a deeper review would be needed to confirm full correctness. Relation To Broader Scientific Literature: I agree—showing that nonlinear transformers achieve better sample complexity than kernel methods is a strong contribution to the theoretical literature on in-context learning (ICL). This work extends Oko et al. (2024b) by improving sample complexity bounds under more general assumptions, reinforcing that transformers can surpass fixed-basis methods through adaptive feature learning. More broadly, the paper connects to research on neural network adaptivity, highlighting how softmax attention enables efficient inference-time learning. These findings contribute to the study of statistical efficiency in deep learning, showing that transformers can generalize more efficiently than traditional approaches. Essential References Not Discussed: To the best of my knowledge, the paper appropriately cites and discusses relevant prior work. Other Strengths And Weaknesses: Strengths The paper tackles a more challenging setting than most theoretical works in the literature, which often rely on simplified transformer models. By incorporating softmax attention and an MLP layer, the study resembles real-world transformer architectures, making its findings more applicable. This more realistic setup allows the authors to establish better sample complexity results than *Oko et al. 
(2024b)*, demonstrating the advantages of nonlinear transformers for inference-time feature learning. Weaknesses 1. Presentation and Organization Need Improvement: - The paper’s structure is difficult to follow, making it harder to grasp the key contributions at first glance. - Theorem 1 could be stated more clearly to improve readability. - The distinctions from Oko et al. (2024b) are spread throughout the paper, making them hard to evaluate. A dedicated comparison section in the related work would clarify these contributions. 2. Limited Experimental Evaluation: - The paper only includes a single synthetic experiment, which is not sufficient to validate the theoretical claims in real-world settings. - Adding an experiment on real-world data would significantly strengthen the empirical support for the paper’s conclusions. Other Comments Or Suggestions: - Questions For Authors: - Distinctions from Oko et al. (2024b): While the paper improves sample complexity results compared to *Oko et al. (2024b)*, the distinctions are spread across different sections. Could you provide a more explicit and structured comparison in the related work section to clarify the key differences? - Role of Softmax Attention: The paper argues that softmax attention plays a crucial role in inference-time feature learning. Have you considered ablation studies or comparisons with linear attention to isolate its impact more clearly? - Empirical Evaluation: The experimental validation is limited to a single synthetic experiment. Would it be possible to evaluate the method on real-world datasets, and if so, what would be the key challenges? - Scalability of Pretraining Requirements: The pretraining task/sample complexity scales with $d^{\Theta(\mathrm{ie}(\sigma))}$. In practical applications, would this requirement be computationally feasible, especially for large-scale transformers? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback. We address the technical comments and questions below. **Distinctions from Oko et al. (2024b)** Based on your suggestion, we will include a dedicated comparison section in the paper; here we briefly summarize the key differences. - **Improved inference-time sample complexity** – In Oko et al. (2024b), the derived inference-time sample complexity is $O(r^{\Theta(deg(\sigma_*))})$, which corresponds to the sample complexity of kernel methods on an $r$-dimensional subspace. In our paper, we derived an improved sample complexity of $O(r^{\Theta(gen(\sigma_*))})$ by showing that nonlinear transformers are capable of inference-time feature learning (beyond the kernel regime). - **Absence of low-dimensional assumption** – While Oko et al. (2024b) assumes that the dimensionality $r$ of pretraining task distribution is much smaller than the ambient dimensionality $d$, our analysis does not rely on this assumption. - **Theoretical analysis** – To derive the improved sample complexity $O(r^{\Theta(gen(\sigma_*))})$, we demonstrate that the nonlinear transformation applied to the output label $\{y_i\}$ reduces the information exponent of the link function $\sigma_*$ to its generative exponent. In particular, it is essential to show that the degree-$gen(\sigma_*)$ term in the Hermite expansion of the nonlinearly transformed labels (via the softmax transformation) is non-vanishing. This requires a careful tracking of the corresponding Hermite coefficients, which constitutes one of the most technically challenging parts of the analysis. This aspect is not addressed in Oko et al. (2024b), where it is only shown that the linear attention module computes correlations on the raw label $\{y_i\}$ and a fixed kernel basis $\{\phi(x_i)\}$ (constructed by a fixed MLP block). **Additional experimental evaluation** Thank you for the suggestions on experiments. We make the following remarks. 
- Our goal is to rigorously study the statistical efficiency of ICL in a well-defined target function class (single-index models). Therefore, similar to prior works that studied ICL for linear regression [Garg et al., 2022; Von Oswald et al., 2023; Ahn et al., 2023; Mahankali et al., 2023a; Zhang et al., 2023], we conducted experiments (on GPT2 models) in a controlled synthetic setting to probe the sample complexity scaling. Pretraining on real-world datasets where the information/generative exponent of the target function is unknown is tangential to our main contribution. - To further validate the derived test-time sample complexity rate, we have added additional experiments to validate that the sample size scaling is indeed beyond CSQ (please refer to [[link]](https://osf.io/p87yc?view_only=be342dc212bb49cb8f72309fdc37d376), see also our response to reviewer 7xsp for details). Following your suggestion, we are currently running ablation studies to probe the necessity of the Softmax attention. We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
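The controlled synthetic setting described in this exchange (in-context prompts drawn from a single-index model with a Hermite link) can be sketched as follows. This is an illustrative reconstruction, not the authors' exact setup: the function name, dimensions, and the choice of a pure degree-$k$ Hermite link are assumptions.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def make_icl_prompt(d=16, n=64, k=3, seed=None):
    """Sample one in-context prompt {(x_i, y_i)} from a single-index model
    y = He_k(<beta, x>), where He_k is the probabilists' Hermite polynomial."""
    rng = np.random.default_rng(seed)
    beta = rng.standard_normal(d)
    beta /= np.linalg.norm(beta)        # unit-norm index direction
    X = rng.standard_normal((n, d))     # isotropic Gaussian covariates
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0                     # link = pure degree-k Hermite term
    y = hermeval(X @ beta, coeffs)
    return X, y, beta
```

A transformer would be pretrained on many such prompts with fresh directions $\beta$, while kernel-method baselines fit each prompt in isolation, which is where the sample-complexity gap discussed above would show up.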
Unconstrained Robust Online Convex Optimization
Accept (poster)
Summary: The paper presents an algorithm to solve unconstrained OCO when the observed gradient might be corrupted. They first present an algorithm when $G := \max_t ||g_t||$ is known, by truncating the observed gradient to ensure the norm is less than a specified $h_t$, and adding a regularizer to limit the growth of $||w_t||$. When using $h_t = G$, this results in a guarantee of $O(||u|| G (\sqrt{T} + k))$ where $k$ measures the amount of corruption. When $G$ is not known, the authors use a clever doubling trick to ensure that in each epoch $h_t$ is close to $\max_t ||w_t||$. They also add a quadratic regularizer when doubling to ensure that the error from truncation remains bounded. This results in a similar regret guarantee with an additional $O((||u||^2 + G^2) k)$. ## update after rebuttal I decided to maintain my score. Claims And Evidence: All claims seem to be thoroughly proved. Methods And Evaluation Criteria: The criterion used is the regret, which is standard in online learning. Theoretical Claims: The authors explain the ideas behind each of the theorems. I read mainly the ideas detailed in the main body of the paper, but not the proofs in the appendix. Experimental Designs Or Analyses: N/A Supplementary Material: I reviewed appendices A-D. Relation To Broader Scientific Literature: As far as I know, this is the first paper to tackle OCO with corrupted gradients in the unconstrained setting. - van Erven et al., 2021 tackled the case of potential outliers in the standard constrained setting, which is a special case of this paper. Essential References Not Discussed: None that I can think of. Other Strengths And Weaknesses: Strengths: 1- It is the very first approach to unconstrained robust OCO in the literature. 2- They combine several tricks well, from unconstrained OCO and robust OCO, to solve it. They also provide new ones that arise from this unique setting. Weaknesses: Other Comments Or Suggestions: Typos: - line 20, abstract: $||u|| \to u$. 
- line 69: $1\{True\} = 0 \to 1$ - Multiple times $|u| \to ||u||$ or $|w_t| \to ||w_t||$. - line 697: unclear what terms are in the $\sum_{t=1}^T$; maybe add parentheses. Questions For Authors: Your algorithm focuses on $||\cdot||$ being the Euclidean norm, but could it be applied to other norms with their associated dual norms? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the positive feedback. Regarding norms, our algorithm extends to other norms and their associated dual norms, as long as $k$ is measured accordingly. We will also revise the manuscript to fix the typos.
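The truncation step from the summary above (scaling any observed, possibly corrupted gradient back onto the ball of radius $h_t$) can be sketched as follows. This is only an illustrative fragment with assumed names: the paper's actual algorithm combines truncation with a regularized unconstrained learner, not the plain OGD update shown here.

```python
import numpy as np

def truncate(g, h):
    """Scale a (possibly corrupted) gradient back onto the ball of radius h."""
    norm = np.linalg.norm(g)
    return g if norm <= h else (h / norm) * g

def ogd_step(w, g, h, eta):
    """One online gradient step using the truncated gradient."""
    return w - eta * truncate(g, h)
```

With $h_t = G$, uncorrupted gradients pass through unchanged, so truncation only caps the damage an adversarially inflated gradient can do to the iterate.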
Summary: The paper addresses the case of online learning on the unconstrained domain with corrupted gradient feedback with no assumptions on the corruptions' nature. The paper provides an algorithm with regret guarantee $\|u\|G(\sqrt{T} + k)$ for the case when the Lipschitz constant is known, and provides an extension of the algorithm with adaptive thresholding with an extra additive penalty of $(\|u\|^2 + G^2)k$, where $k$ reflects the total corruption level, and $u$ is any comparison point. The paper provides a very thorough introduction, discusses all the challenges of the considered problem setup, and carefully analyzes both the known and unknown Lipschitz constant cases, giving a lot of intuition and a clear flow. ## Update after rebuttal: I thank the authors for the response. I have also read the other reviews, and I decided to keep my initial score. Claims And Evidence: Yes. However, it should be considered that in the non-differentiable case the sub-gradients might be non-unique. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, just the ones in the main body of the paper, I did not check the appendix. There is an unclear part in line 331, I wrote a question regarding it in the questions section. Experimental Designs Or Analyses: Not applicable Supplementary Material: No Relation To Broader Scientific Literature: They provide new regret bounds for unconstrained online convex learning with corrupted feedback. The work in a certain sense extends to the unconstrained OCO setting and improves upon a method in van Erven et al. (2021), which considered a bounded domain. The previous works on unconstrained OCO only considered stochastic unbiased feedback perturbations. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: The paper is very clearly written and is a pleasure to read. All statements seem to be thorough, relations to the previous related works are discussed. 
Other Comments Or Suggestions: - In line 198, right side, I believe that the sum of $r_t$ should be lower bounded not with $O(\cdot)$ but rather $\Omega(\cdot)$ notation, since it is a lower bound. A rate in $O$ alone will not be enough: even if $r_t$ grows at the same rate as the error but 2 times slower, their difference will not cancel out but will keep growing. - line 178 right side: Upper bound on ERROR I believe should be $2kG \max_t\|\omega_t\|$ instead of $kG \max_t\|\omega_t\|$. That will also affect the constant $c$ later in line 218 right side and further. Questions For Authors: - line 331: If we want to bound a truncation error, $E_{\bar P}$ defined in eq. (10) as $\|g_t - \tilde g_t^c\|$, why do we bound $\|\tilde g_t - \tilde g_t^c\|$ here? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. On the non-differentiable case: we agree that OCO via linearized losses naturally extends to subgradients, and thus our results apply to non-differentiable convex functions as well. Regarding the term $E_{\bar P}$ (line 331, left), this should indeed be $g_t - \tilde{g}_t^c$ as suggested. This was a typo in the main text; the expression was handled correctly in the Appendix (line 1282) and was referred to as $offset_1$. We will revise the manuscript accordingly.
Summary: The authors investigate online convex optimization (OCO) in an unconstrained domain under corrupted gradient feedback. They introduce a new measure of corruption, denoted as $k$, which accounts for both the number of corrupted rounds and the magnitude of gradient deviations. Given $k$, their proposed algorithm achieves a regret bound of $\mathcal{O}(\| u \| G (\sqrt{T} + k))$ for any comparator, provided the gradient norm bound for uncorrupted rounds is known. In cases where the gradient norm bound is unknown, they propose a filtering approach that guarantees a similar regret bound with an additional term of $\mathcal{O}( (\| u \|^2 + G^2) k)$. Additionally, they establish matching lower bounds (up to logarithmic factors) for any choice of comparator $u$. ## update after rebuttal I would like to thank the authors for their response, and I have decided to retain my original positive score. Claims And Evidence: The claims are theoretical and are supported by thorough discussions and arguments presented in the main text, which appear sound and reasonable. However, I have not reviewed the proofs in the supplementary material in detail. Methods And Evaluation Criteria: The paper only has theoretical contributions. Theoretical Claims: The discussions and arguments presented in the main text appear sound and reasonable. However, I have not reviewed the proofs in the supplementary material in detail. Experimental Designs Or Analyses: The paper only has theoretical contributions. Supplementary Material: I have not reviewed the proofs in the supplementary material in detail. Relation To Broader Scientific Literature: The paper borrows some ideas from Zhang & Cutkosky (2022), Cutkosky & Mhammedi (2024), and van Erven et al. (2021). In particular, it adopts the composite loss function method from Zhang & Cutkosky (2022) to handle large corrupted gradients. 
The authors propose a novel filtering technique to handle unknown bounds on the gradient norms of uncorrupted rounds, which serves a similar purpose to the filtering technique used in van Erven et al. (2021). Essential References Not Discussed: The references are cited and discussed adequately. Other Strengths And Weaknesses: In my view, the paper’s primary strength lies in the development of a filtering technique to address the unknown gradient norm bound and associated challenges in an unbounded domain. The paper is well-written, providing a strong motivation for the problem while clearly articulating both the problem itself and the main obstacles in addressing it. The authors effectively discuss their techniques and results, making the work accessible and easy to understand. The primary weakness of the paper is its reliance on established ideas, especially their similarities to Zhang & Cutkosky (2022), which diminishes the overall novelty of the work. The key technical tools employed are well-known, which somewhat limits the scope of the contributions. Other Comments Or Suggestions: A few minor typos: 1. Line 187, col 2: \cite should be replaced with \citep. 2. Line 330, col 1: "the a" appear together Questions For Authors: The paragraph before Corollary 6.3 claims that the regret with respect to the baseline point $u = 0$ is **constant** no matter what $k$ is. However, the regret bound in Corollary 6.3 has a $\mathcal{O}(k)$ dependency. Is there something that I am missing here? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for validating the theoretical contributions. Regarding the constant regret interpretation after Corollary 6.2, it relies on setting $\tau_D = O(1/k)$, which is an initialization parameter used in the doubling trick to track $\max_{i \le t} \|w_i\|$. Note that if $k$ is unknown, we may set $\tau_D=1/\sqrt{T}$ to achieve constant Regret(0) for any $k\le \sqrt{T}$, which is the range in which our algorithm suffers $\tilde O(\sqrt{T})$ regret overall. Apologies for not specifying this properly in the theorem statement - we will clarify this in the manuscript and correct the noted typos.
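The doubling trick referenced in this thread (a truncation threshold that tracks $\max_{i\le t}\|w_i\|$ and doubles whenever the running max exceeds it) can be sketched as follows. This is an illustrative fragment with assumed names; it omits the quadratic regularizer the paper adds at each doubling and the initialization parameter $\tau_D$.

```python
def threshold_schedule(iterate_norms, h0=1.0):
    """Return the sequence of thresholds produced by a doubling scheme:
    a new epoch starts whenever the running max of ||w_t|| exceeds h."""
    h, thresholds = h0, [h0]
    w_max = 0.0
    for norm in iterate_norms:
        w_max = max(w_max, norm)
        while h < w_max:  # double until h again dominates max_t ||w_t||
            h *= 2.0
            thresholds.append(h)
    return thresholds
```

Because each doubling at least doubles $h$, the number of epochs is logarithmic in the final value of $\max_t \|w_t\| / h_0$, which is what keeps the overhead of restarting manageable.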
Summary: The paper studies the challenging problem of online convex optimization (OCO) in an unconstrained domain under the presence of adversarially corrupted gradient feedback. Unlike classical OCO, where gradient estimates are assumed to be accurate or only subject to benign noise, this work makes no statistical assumptions on the corruptions. Instead, it considers arbitrary (and potentially adversarial) deviations in the gradient signals. A regret bound is provided in the paper. Matching lower bounds are provided that demonstrate the tightness of the upper bounds under certain regimes. ## update after rebuttal I appreciate the authors for the rebuttal. Although it does not address my concerns, it gave me some insight into the parameter-setting issue. I decided to keep my original score. Claims And Evidence: The claims of the paper are clear and have evidence. Methods And Evaluation Criteria: The theoretical parts of the paper are clear, but I think this paper lacks some convincing numerical studies to support the theoretical findings. Theoretical Claims: While the theoretical contributions of this paper are interesting, the paper would benefit from a more extensive discussion on the practical implications of the algorithms. For example, insights into how the proposed methods perform on real data or in simulated adversarial settings would strengthen the overall impact. Although the paper analyzed the regret bounds under more general assumptions, the bound is still built on an implicit assumption that the gradient is bounded, which may not be true for some real applications (or the upper bound is very large). Experimental Designs Or Analyses: 1. The numerical study of the proposed algorithm is limited. Including more experiments on simulation data and real datasets that illustrate the algorithm’s performance under various corruption regimes could provide practical validation of the theoretical findings. 2. 
Parameter Analysis: A sensitivity analysis regarding the choice of hyperparameters would be a welcome addition, potentially guiding practitioners in implementing the methods. Supplementary Material: I briefly read the theoretical analysis part of the paper. Relation To Broader Scientific Literature: The theoretical findings of this paper are interesting which can contribute to the area of online optimization algorithms. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Please see the above comments. Other Comments Or Suggestions: Please see the comments above. Some numerical evidences are highly recommended to be added. Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. As this work focuses on the theoretical foundations of robust online convex optimization, we look forward to systematically investigating practical applications in future work. In terms of hyper-parameters, the only required user input is the corruption level $k$. Even without exact knowledge of $k$, our theoretical results suggest that setting $k = O(\sqrt{T})$ guarantees the classical OCO regret bound $\tilde{O}(\sqrt{T})$ as long as the true corruption level is less than $\sqrt{T}$, which tolerates a significant range of corruption problems. We also look forward to an empirical study of the impact of mis-specifying $k$. As for the scenario where the gradient norm is large, existing theory in adversarial online learning suggests regret scales with the maximum gradient norm. Our results actually help combat this somewhat: a few large outlier gradients could be modeled as “corruptions”, and our regret would not scale with their value. Nevertheless, we acknowledge this limitation and are interested in future developments that mitigate this when large gradients are common.
Improving the Continuity of Goal-Achievement Ability via Policy Self-Regularization for Goal-Conditioned Reinforcement Learning
Accept (poster)
Summary: This paper addresses the issue of discontinuity in goal-achievement capabilities in Goal-Conditioned Reinforcement Learning (GCRL) algorithms. First, this paper theoretically proves that reusing successful trajectories can help achieve adjacent goals, but policy discrepancies must be controlled to avoid performance degradation. To tackle this, this paper proposes a Margin-Based Policy Self-Regularization (MSR) approach, which constrains policy differences between adjacent goals within a minimal threshold. The approach is evaluated on two robotic arm control tasks and an aircraft control task, demonstrating improved continuity in goal-achievement ability. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Evaluation metrics include policy discrepancy (KL divergence), return gap, and cumulative rewards, which are appropriate for the problem. Theoretical Claims: Yes. The theoretical results are well-structured and logically sound. Theorem 3.3 correctly bounds cumulative rewards in terms of KL divergence, providing rigorous justification for the method. Experimental Designs Or Analyses: Yes. This paper conducts performance analysis on different GCRL algorithms. Supplementary Material: Yes. The supplementary material includes proofs and additional experiments. Theoretical proofs (Appendix A), Additional ablations (Appendix D). Relation To Broader Scientific Literature: The paper positions itself within GCRL literature. Essential References Not Discussed: Yes. It would be good if this paper could discuss the relationship between MSR and the TRPO/PPO methods in designing the MSR loss. Other Strengths And Weaknesses: Strengths: 1. The paper effectively identifies and addresses the challenge of discontinuity in goal-achievement capabilities within GCRL. This clear motivation underpins the design of MSR. 2. The authors present a robust theoretical foundation for their work, enhancing the credibility and depth of the proposed MSR method. 3. 
The inclusion of a thorough ablation study strengthens the paper by demonstrating the impact of various components of the MSR approach. Weaknesses: 1. The MSR method introduces at least three key hyperparameters ($\lambda$, $\beta$, $\epsilon$) that need careful tuning. This requirement may increase the complexity of applying the approach across different environments and tasks. 2. The ablation study indicates that the optimal value for the hyperparameter $\lambda$ is 0.001, which seems quite small. This raises questions about the significance of the MSR loss contribution. Additionally, the lack of explanation for Figure 7's results makes it challenging to understand the limited impact observed. A comparative analysis would be beneficial to elucidate this point. 3. Theoretical Equations (1) and (2) present symmetric upper and lower bounds. There is a concern that these bounds might be too loose in practical settings, potentially affecting the precision of the theoretical guarantees. Other Comments Or Suggestions: 1. It would be beneficial for the paper to analyze whether the policy regularization introduced by MSR adds significant computational overhead during inference. 2. The paper should consider incorporating a convergence analysis of the proposed MSR method. As shown in Figure 4(b), the cumulative reward appears not to have converged, raising concerns about the stability and reliability of the learning process. A detailed convergence analysis would provide insights into the long-term performance and robustness of the MSR approach. Questions For Authors: 1. The theoretical results assume a distance-based reward. Would MSR still work for other reward types? 2. Does policy self-regularization introduce significant training overhead? How does MSR impact convergence speed? 3. Why is the optimal value of $\lambda$ so small? Could the authors add some analysis? 4. Has Figure 4(b) converged? 5. Equations (1) and (2) have symmetric upper and lower bounds; are the bounds tight? 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: `... discuss the relationship between MSR and TRPO, PPO methods in designing the MSR loss.` We denote the policy at the t-th iteration as $\pi_{\theta_t}$, with $\theta$ denoting policy parameters. $\pi_{\theta_t}(\cdot|s,g)$ maps state $s$ and goal $g$ to an action distribution. TRPO and PPO's key principle is to limit policy updates by constraining the KL divergence between policies in consecutive training iterations, $E_{g\sim p_{dg}}D_{KL}(\pi_{\theta_{t-1}}(\cdot|\cdot,g),\pi_{\theta_t}(\cdot|\cdot,g))$. In contrast, MSR restricts the policy's action distribution variation across goals within the same iteration, $E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')}D_{KL}(\pi_{\theta_t}(\cdot|\cdot,g),\pi_{\theta_t}(\cdot|\cdot,g+\epsilon))$. In essence, TRPO, PPO, and MSR use KL divergence for policy constraints but focus on distinct optimization objectives. This clarification will be added to our paper. `Would MSR still work for other reward types?` MSR is effective for all reward types: 1. Theorem 3.4 makes no assumptions about the reward function. The return gap remains bounded, albeit with a complex expression. For clarity, we derived a simpler bound under the distance-based reward, facilitating understanding of the relationship between $|J(\pi_{g_1})-J(\pi_{g_2})|$ and $d_{KL}(\pi_{g_1}(\cdot|s),\pi_{g_2}(\cdot|s))$. 2. In our experiments, we used both distance-based rewards (in Reach and VVC) and sparse rewards (in Push). Results demonstrate MSR's effectiveness across these different reward settings. `Does MSR introduce significant training overhead? How does MSR impact convergence speed?` MSR adds training overhead but does not affect inference, since the regularization is applied only during policy optimization. For on-policy RL, where the main cost is environment sampling, MSR's effect is negligible, with only a 2.14% increase. 
In off-policy RL, which leverages replay buffer data, the impact is more pronounced: a 21.14% increase for MSR-GC-SAC and 16.02% for MSR-HER. Figs. 13, 17, and 21 reveal MSR's convergence rate is on par with baselines. Overall, despite the slight added training overhead, MSR markedly improves policy continuity in achieving goals.

||GC-PPO|MSR-GC-PPO|GC-SAC|MSR-GC-SAC|HER|MSR-HER|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Wall Clock Time (s)|3498|3573|2621|3175|5096|5912|

_**Note**: Results come from PPO (1e7 steps), SAC (1e5 steps), and HER (1e5 steps) on VVC._ `Why the optimal value of λ is quite small?` We address your concerns from three aspects: 1. MSR is designed to lower $D_{KL}(\pi_{g_1}(\cdot|s), \pi_{g_2}(\cdot|s))$ without making it excessively small. While 0.001 may seem trivial, our ablation studies confirm it sufficiently meets our objectives. 2. We've included a [plot](https://github.com/anonymouslinks/anonymous/blob/main/her_losses_lambda_1e-3_in_training.png) showing training loss trends with $\lambda = 0.001$. MSR loss is comparable to entropy and critic losses and an order of magnitude less than actor loss, demonstrating its impact on policy optimization despite the small $\lambda$. 3. Small coefficients for auxiliary losses are standard in RL. The primary goal is to maintain the RL objective's dominance, with other losses in a supporting role. For instance, entropy loss in PPO typically falls within $[10^{-5}, 10^{-2}]$, as seen in the [Stable Baselines3 Zoo](https://github.com/DLR-RM/rl-baselines3-zoo/blob/master/hyperparams/ppo.yml). `Are the bounds of Eq.(1) and (2) tight?` Eq.(1) and (2) can be unified as $|J(\pi_{g+\epsilon})-\frac{1}{1-\gamma}\sum_{(s,a)}\rho_{\pi_g}(s,a)r(s,a,g+\epsilon)| \le |\frac{2\sqrt{2}R_{\text{max}}}{(1-\gamma)^2}\sqrt{D_{KL}(\pi_{g+\epsilon}(\cdot|s),\pi_g(\cdot|s))}|$. We examine the tightness of this bound up to a constant with [a particular MDP instance](https://github.com/anonymouslinks/anonymous/blob/main/MDP.png). 
In the case of $g_1$ with policy $\pi_{g_1}(\cdot|s_0)=(0.6,0.4)$: 1. For $\pi_{g_2}$ identical to $\pi_{g_1}$, the left-hand side equals $\frac{0.2\gamma}{1-\gamma}$, while the right-hand side is $\frac{0.4\gamma}{1-\gamma}$, showing a constant factor difference. 2. When $\pi_{g_2}$ is slightly different, with $\pi_{g_2}(\cdot|s_0)=(0.55,0.45)$, the left-hand side becomes $\frac{0.15\gamma}{1-\gamma}$, and the right-hand side is $\frac{0.4\gamma + 0.2}{1-\gamma}$, again differing by a constant factor. We will include the detailed proof in the Appendix of manuscript. `(1) Is Figure 4(b) converged?. (2) The lack of explanation for Figure 7's results ...` Thank you for your suggestion. We have increased the training steps for HER and MSR-HER on Push and updated Fig.4 and added a convergence analysis in the manuscript. Fig.7 illustrates that a larger MSR strength $\lambda$ results in a stronger constraint on $D_{KL}(\pi_{g+\epsilon}(\cdot|s),\pi_g(\cdot|s))$ (values approach 0). We will provide a detailed analysis of the experimental data and corresponding conclusions in the manuscript.
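The margin-based self-regularization discussed in this thread can be sketched for diagonal-Gaussian policies as follows. This is an illustrative assumption, not the paper's exact Eq.(6): the closed-form Gaussian KL and the hinge are standard, but the function names, the diagonal-Gaussian parameterization, and the default margin/weight values are made up for the sketch.

```python
import numpy as np

def kl_diag_gauss(mu1, sig1, mu2, sig2):
    """Closed-form KL( N(mu1, diag(sig1^2)) || N(mu2, diag(sig2^2)) )."""
    return np.sum(np.log(sig2 / sig1)
                  + (sig1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * sig2 ** 2) - 0.5)

def msr_loss(mu_g, sig_g, mu_ge, sig_ge, margin=0.01, lam=1e-3):
    """Hinged KL penalty between the policies for goal g and perturbed goal
    g + eps: the max keeps the KL small without forcing it all the way to
    zero, matching the rebuttal's point that some policy difference between
    adjacent goals must be preserved."""
    kl = kl_diag_gauss(mu_g, sig_g, mu_ge, sig_ge)
    return lam * max(kl - margin, 0.0)
```

In training, a penalty of this shape would be added to the actor loss, with $\epsilon$ drawn uniformly from $(-\epsilon', \epsilon')$ as in the objective $E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')}D_{KL}(\pi_{\theta_t}(\cdot|\cdot,g),\pi_{\theta_t}(\cdot|\cdot,g+\epsilon))$ stated in the rebuttal.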
Summary: Reaching adjacent goals utilizing the same policy is non-trivial due to the limited robustness of policy improvement. The paper studies discontinuity in goal-achievement observed in Goal-Conditioned Reinforcement Learning (GCRL). Theoretically, the paper identifies constraints between goal-reaching policies of the current goal and adjacent goal in order to strike a balance between similarity and policy diversity, which can hinder policy performance. This leads to the creation of Margin-Based Policy Self-Regularization (MSR), which regularizes a given RL policy with the KL constraint between the current and adjacent goal-reaching policies. MSR can be combined with any RL policy and on the evaluated benchmarks, the method shows performant goal reaching ability while minimizing the KL constraint. Claims And Evidence: Please refer to strengths and weaknesses. Methods And Evaluation Criteria: Please refer to strengths and weaknesses. Theoretical Claims: Please refer to strengths and weaknesses. Experimental Designs Or Analyses: Please refer to strengths and weaknesses. Supplementary Material: Yes, the appendix. Relation To Broader Scientific Literature: Please refer to strengths and weaknesses. Essential References Not Discussed: Please refer to strengths and weaknesses. Other Strengths And Weaknesses: ### Strengths * The paper is well written and easy to follow. * Authors have presented the theoretical results in an intuitive manner. ### Weaknesses * **Theory To Practice**: My main concern is the connection of theoretical results to the practical algorithm. The theoretical claims state that previously trained policies can reach similar and adjacent goals while obeying similarity constraints, i.e., the goal-reaching policies are similar but not the same. However, I am struggling to understand how this connects to the empirical method of regularizing the policy with the divergence between policies. 
Intuitively, we want to balance the similarity between the two policies, but the method regularizes the RL objective using this as a constraint. It would be more suitable to automatically adjust the policy using a distillation scheme or perhaps learn a temperature parameter. Could the authors explain how clipping the KL term connects to the theoretical results? * **Empirical Evaluation**: The current empirical evaluation is limited to only a few tasks and evaluation of $J(\pi)$ and $\text{KL}(\pi_{g}||\pi_{g+\epsilon})$ as optimization metrics. However, the paper does not answer the central question of how MSR benefits the goal-reaching abilities of RL algorithms. Does MSR help the policy reach farther goals? Does MSR benefit long-horizon reasoning of RL agents? How does MSR provide robustness to environmental perturbations? Another important aspect to validate is the empirical value of the theoretical claims. Do the bounds obtained by the authors hold for KL constraints? When $\text{KL}(\pi_{g}||\pi_{g+\epsilon}) = 0$, does the policy actually reach more adjacent goals? In its current form, the work only explores performance of the goal-reaching policy using prefixed metrics. * **Contribution and Utility**: I am struggling to understand the novel contribution of the work and its utility for the RL community. How does the algorithm benefit RL algorithms, since KL regularization is already a well-known technique for policy improvement as well as GCRL? Besides, KL minimization is akin to likelihood maximization, which raises the question of what new findings the paper contributes. Perhaps the main contribution lies in reaching adjacent goals via KL constraint as a distribution matching technique. In that case, it would be helpful for the authors to study unseen environments or few-shot goal-reaching improvement of the policy. Other Comments Or Suggestions: NA Questions For Authors: Please refer to strengths and weaknesses. 
Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: `W1: Theory To Practice` In our work, we aim to address the issue of ensuring that if a policy can achieve a goal $g$, it should also be capable of achieving goals in the vicinity of $g$, which we denote as $g+\epsilon$. From the perspective of cumulative rewards, our objective is to minimize $E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')}|J(\pi(\cdot|\cdot,g))-J(\pi(\cdot|\cdot,g+\epsilon))|$. Our method is informed by the following insights: 1. We show analytically that $E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')} |J(\pi(\cdot|\cdot,g))-J(\pi(\cdot|\cdot,g+\epsilon))|$ can be bounded by $KL[\pi(\cdot|\cdot,g)||\pi(\cdot|\cdot,g+\epsilon)]$, as concluded in Theorem 3.4, Corollary 3.5 and 3.6. This led us to consider constraining $KL[\pi(\cdot|\cdot,g)||\pi(\cdot|\cdot,g+\epsilon)]$ during policy optimization. 2. Theorem 3.3 suggests that the difference between $\pi(\cdot|\cdot,g)$ and $\pi(\cdot|\cdot,g+\epsilon)$ should not be entirely eliminated, implying that $KL[\pi(\cdot|\cdot,g)||\pi(\cdot|\cdot,g+\epsilon)]$ should not be optimized to be too small. Based on these findings, we develop the margin-based policy self-regularization method, as outlined in Eq.(6). The $KL$ term in the equation corresponds to the first point, and the max operation corresponds to the second point. `W2: Empirical Evaluation` We address your concerns from the following three aspects: **Firstly**, we believe that our experimental design aligns well with our theoretical derivations. In conjunction with our response to W1, our method aims to reduce $E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')}|J(\pi(\cdot|\cdot,g))-J(\pi(\cdot|\cdot,g+\epsilon))|$ by constraining $KL[\pi(\cdot|\cdot,g)||\pi(\cdot|\cdot,g+\epsilon)]$, thereby achieving the objective of enabling the policy to achieve not only $g$ but also goals in the vicinity of $g$. When designing our experiments, we sought to answer the following questions: 1. 
Can MSR reduce $KL[\pi(\cdot|\cdot,g)||\pi(\cdot|\cdot,g+\epsilon)]$? 2. Does a reduction in $KL[\pi(\cdot|\cdot,g)||\pi(\cdot|\cdot,g+\epsilon)]$ lead to a decrease in $E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')}|J(\pi(\cdot|\cdot,g))-J(\pi(\cdot|\cdot,g+\epsilon))|$? 3. Can MSR enhance the policy's cumulative rewards, $J(\pi)$? The first two questions directly correspond to our theoretical derivations, while the third question assesses the practicality of our method, indicating that improving the continuity of the policy's goal-achievement can also enhance its ability to obtain cumulative rewards. Therefore, we consider our experimental design to be well-aligned with our theoretical development. **Secondly**, our focus is on improving policy continuity for goal achievement by reducing $E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')}|J(\pi(\cdot|\cdot,g))-J(\pi(\cdot|\cdot,g+\epsilon))|$, not on reaching farther goals, long-horizon reasoning, or robustness to environmental perturbations. **Thirdly**, our Section 4.4 ablation study with $\lambda=1$ shows $KL(\pi_g||\pi_{g+\epsilon})$ nearing 0 (Fig.7(a)), but $J(\pi)$ is lowest (Fig.7(b)). This observation aligns with Theorem 3.3, emphasizing the need to balance the reduction of $KL(\pi_g||\pi_{g+\epsilon})$ to maintain goal-achievement continuity without overly compromising performance. `W3: Contribution and Utility` The KL divergence is commonly used as a measure of distance to constrain policies in policy optimization. 
In the context of multi-goal settings, we compare the MSR method with two KL-based policy constraint approaches:

||Optimization Objective|
|:-:|:-:|
|TRPO[1]|$E_{g\sim p_{dg}}KL[\pi_{\theta_{t-1}}(\cdot\vert\cdot,g)\Vert\pi_{\theta_t}(\cdot\vert\cdot,g)]$|
|Offline-to-Online[2]|$E_{g\sim p_{dg}}KL[\pi_0(\cdot\vert\cdot,g)\Vert\pi_{\theta_t}(\cdot\vert\cdot,g)]$|
|MSR|$E_{g\sim p_{dg},\epsilon \sim(-\epsilon',\epsilon')}KL[\pi_{\theta_t}(\cdot\vert\cdot,g)\Vert\pi_{\theta_t}(\cdot\vert\cdot,g+\epsilon)]$|

1. TRPO aims to prevent the policy from changing too much during optimization; thus, it achieves this objective by constraining the KL divergence between policies in consecutive training iterations. 2. The Offline-to-Online method aims to prevent the policy from forgetting the knowledge learned offline during online learning, by constraining the KL divergence between the offline-learned policy, $\pi_0$, and the policy currently being learned online, $\pi_{\theta_t}$. 3. MSR, on the other hand, seeks to enhance the policy's continuity of goal-achievement ability by limiting the divergence between policies corresponding to adjacent goals. **In summary, although all these methods utilize KL divergence, from the perspective of optimization objectives, MSR is fundamentally distinct from other KL-based methods.** [1] Schulman, John, et al. Trust region policy optimization. ICML, 2015. [2] Baker, Bowen, et al. Video pretraining (VPT): Learning to act by watching unlabeled online videos. NeurIPS, 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response to my comments. After going through the authors' rebuttal and response to other reviewers, my concerns have been addressed and I have decided to raise my score. I thank the authors for their efforts.
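Since Eq.(6) itself is not reproduced in the rebuttal above, the following is a minimal numpy sketch of one plausible reading of the margin-based KL self-regularization term described in W1, for diagonal Gaussian policies: the hinge keeps only the KL mass above a margin, so the divergence between adjacent-goal policies is bounded without being driven all the way to zero. The function names, Gaussian parameterization, numbers, and margin value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kl_gauss(mu_p, sig_p, mu_q, sig_q):
    """KL( N(mu_p, sig_p^2) || N(mu_q, sig_q^2) ) for diagonal Gaussians,
    summed over action dimensions."""
    return np.sum(np.log(sig_q / sig_p)
                  + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2 * sig_q ** 2) - 0.5)

def msr_penalty(mu_g, sig_g, mu_ge, sig_ge, margin):
    """Hinge-style penalty: only KL mass above the margin is penalized, so the
    divergence between adjacent-goal policies is bounded but not driven to zero
    (cf. the role of Theorem 3.3 in the rebuttal)."""
    return max(0.0, kl_gauss(mu_g, sig_g, mu_ge, sig_ge) - margin)

# Two adjacent-goal policies with slightly different action distributions.
mu_g, sig_g = np.array([0.20, -0.10]), np.array([0.3, 0.3])
mu_ge, sig_ge = np.array([0.25, -0.05]), np.array([0.3, 0.3])

kl = kl_gauss(mu_g, sig_g, mu_ge, sig_ge)
print(kl, msr_penalty(mu_g, sig_g, mu_ge, sig_ge, margin=0.5))
```

With a large margin the penalty vanishes and the small divergence between the two policies is left untouched; with a tight margin the excess divergence would be penalized during training.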
Summary: In their present paper, the authors address an evident issue appearing in goal-conditioned RL: discontinuity between control policies even in cases of adjacent goals, i.e. where their respective goals are only marginally separated by some distance $\epsilon$. The insights generated by a comprehensive analysis of the mentioned effect, which takes into account the KL divergence as a central measure to define policy discrepancy, are consequently turned into a new policy regularization method, MSR (Margin-Based Policy Self-Regularization), which can be attached to common off- and on-policy RL algorithms, such as SAC, HER and PPO, in order to improve not only the continuity properties between adjacent-goal policies but also the overall policy performance (as measured by the total return value over all achievable goals). Based on three individual benchmark environments, the authors provide sound experimental support for their analytical claims and demonstrate the practical applicability of their method. Claims And Evidence: All claims in the present work are supported by theoretical arguments, mathematical proofs and experimental evidence. All terms and prerequisites are sufficiently introduced and explained, and there is a good balance between showing essential parts of the analysis in the main text and providing further details in the appendix. The authors present a thorough experimental analysis of their central hypotheses. My only remark is that a higher number of random seed varying trials would improve their statistical evaluation of the experimental results, as the current number (5 trials) yields relatively large statistical error bars on the individual KPIs of Table 1. Methods And Evaluation Criteria: I don't find any major flaw or concern in the continuity analysis for adjacent goal-conditioned policies, as provided in Section 3. 
The proposed method, MSR, is reasonably introduced as a direct consequence of the analytical results and, as the authors claim, can be used to augment essentially any GCRL policy algorithm as an additional regularization term added to the reward definition. The authors thereby introduce a reasonable and relatively low number of (hyper-)parameters to enable fine-tuning of their regularization term. A sufficiently detailed ablation study for all mentioned hyper-parameters is provided as part of the experimental evaluation. The evaluation criteria (essentially the three metrics shown in the columns of Table 1) are reasonably chosen to highlight and support the claims and conclusions of the authors. As mentioned in the previous section, it appears that all results reported in Table 1 still come with a comparably large statistical error. In a few cases, the errors become large enough to weaken the substance of the conclusions drawn by the authors, as the differences may also be the result of random fluctuations. Reducing this error by increasing the number of random seed trials would help to add significance to the results and should be considered in the final version of the paper. Theoretical Claims: I checked soundness and plausibility of all equations as part of the main text, and some (but not all) of the more detailed derivations provided in the appendix. Experimental Designs Or Analyses: In my opinion, the authors have chosen a reasonable selection of benchmark comparisons, including representatives of both off-policy and on-policy GCRL SOTA algorithms. The simplicity of extending each candidate with the proposed regularization term helps to provide a clearly structured comparison between methods with and without the MSR extension. The visual illustrations are generally well-chosen and supportive in understanding the most relevant effect. 
However, the authors must improve the readability of axis labels and (color) legends used in their figure, as they are currently far too small or sometimes missing/unreadable at all. This is a mandatory issue for any final version provided if accepted for publication. Some of the plots in the main text are borderline small, so I strongly recommend finding a solution to make central results more visible. Supplementary Material: I've read the supplementary material, however, with reduced scrutiny regarding the experimental details provided. Relation To Broader Scientific Literature: The authors were generally short on introducing related works in Section 5; however, they do provide a sound embedding of their work into the context of GCRL. They underline that, to the best of their knowledge, the paper is the first to focus on the continuity issue in the context of GCRL. I personally cannot judge whether this is true but I also have no counter-example to contest their claim. Essential References Not Discussed: The authors mainly discuss the relation to GCRL as applied to model-free policy-gradient and/or Q-value ("critic" function) related research. What is somehow missing is the relation to model-based RL methods, which are also strongly used in connection with multi-objective and goal-conditioned RL tasks, such as - Marc P. Deisenroth and Dieter Fox. Multiple-target reinforcement learning with a single policy. ICML 2011 Workshop on Planning and Acting with Uncertain Models, 2011 - M. Weber, P. Swazinna, D. Hein, S. Udluft and V. Sterzing, "Learning Control Policies for Variable Objectives from Offline Data," 2023 IEEE Symposium Series on Computational Intelligence (SSCI) Other Strengths And Weaknesses: Strengths: ———— The authors find a clear language and easy-to-follow structure to motivate the underlying issue with discontinuity in relation to adjacent goal-conditioned control policies. 
The paper appears well written and succeeds in providing a clear relation of claims and evidence. Comments/Remarks: ———————— l. 255 (right column): If indeed GC-PPO fails entirely at solving the general RL tasks in the mentioned benchmarks, it is questionable to use any of those results to draw conclusions about the (dis-)continuity properties and total return accumulation (as partly admitted by the authors). Maybe those results should be marked or put into brackets in Table 1, to prevent false conclusions. Other Comments Or Suggestions: I repeat from the previous section: The authors must improve the readability of axis labels and (color) legends used in their figure, as they are currently far too small or sometimes missing/unreadable at all. This is a mandatory issue for any final version provided if accepted for publication. Some of the plots in the main text are borderline small, so I strongly recommend finding a solution to make central results more visible. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and careful review of our paper. We address your concerns in the following: `My only remark is that a higher number of random seed varying trials would improve their statistical evaluation of the experimental results, as the current number (5 trials) yields relatively large statistical error bars on the individual KPIs of Table 1.` Thank you for your suggestion. We have increased the number of random seeds from 5 to 10, re-conducted the training and testing, and updated Table 1. The table below presents the results on VVC, showing that the variance of the obtained results has decreased to some extent.

|Algorithms|$D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$|$J(\pi_{g+\epsilon}) - J(\pi_g)$|$J(\pi)$|
|:-:|:-:|:-:|:-:|
|GC-SAC|0.37±0.12|-22.00±5.06|-138.20±14.16|
|MSR-GC-SAC|0.33±0.09|-20.94±7.62|-132.09±8.06|
|HER|0.65±0.22|-7.91±3.22|-69.21±12.87|
|MSR-HER|0.58±0.16|-7.52±3.15|-64.73±13.55|
|GC-PPO|0.08±0.08|-44.50±12.09|-169.04±25.57|
|MSR-GC-PPO|0.17±0.21|-30.65±20.19|-146.64±28.05|

Additionally, we have expanded our discussion to include the statistical significance of the results. Taking $D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$ on VVC as an example, the table below presents the T-test's $p$-values for the evaluation results of policies trained with 5 different random seeds between HER and MSR-HER. All the $p$-values are below 0.05, indicating that MSR-HER can train policies with a smaller $D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$ compared to HER. 
||MSR-HER(1)|MSR-HER(2)|MSR-HER(3)|MSR-HER(4)|MSR-HER(5)|
|:-:|:-:|:-:|:-:|:-:|:-:|
|HER(1)|$2.4*10^{-5}$|$1.2*10^{-4}$|$3.3*10^{-11}$|$5.2*10^{-33}$|$8.5*10^{-13}$|
|HER(2)||$7.1*10^{-16}$|$2.6*10^{-14}$|$5.4*10^{-4}$|$2.6*10^{-8}$|
|HER(3)|||$2.9*10^{-4}$|$4.0*10^{-2}$|$2.3*10^{-4}$|
|HER(4)||||$1.5*10^{-5}$|$2.5*10^{-6}$|
|HER(5)|||||$7.6*10^{-7}$|

_**Note**: We trained multiple HER and MSR-HER policies with distinct random seeds (numbers in parentheses indicate the seed index), each evaluated over 12100 episodes. As the T-test necessitates i.i.d. data for both groups being compared, and the test outcomes from different seeds do not belong to the same distribution, we can only conduct pairwise comparisons between the policies trained by HER and MSR-HER with varying random seeds._ We are continuing to increase the number of random seeds to obtain more reliable experimental results. Once again, we thank you for your valuable suggestion. `However, the authors must improve the readability of axis labels and (color) legends used in their figure, as they are currently far too small or sometimes missing/unreadable at all. This is a mandatory issue for any final version provided if accepted for publication. Some of the plots in the main text are borderline small, so I strongly recommend finding a solution to make central results more visible.` We sincerely thank the reviewer for highlighting the issue regarding the readability of our figure axis labels and legends. To enhance the readability of all figures, we have taken the following steps: 1. We have increased the font size of all axis labels and legends to ensure they are easily readable. 2. For figures utilizing color legends, we have not only enlarged the text but also ensured that the color distinctions are more pronounced and clearly labeled. 
`If indeed GC-PPO fails entirely at solving the general RL tasks in the mentioned benchmarks, it is questionable to consider any of those results to draw conclusions about the (dis-)continuity properties and total return accumulation (as partly admitted by the authors). Maybe those results should be marked or put into brackets in Table 1, to prevent false conclusions.` As suggested, we have marked the results of GC-PPO in Table 1 with italics to indicate that these results should be interpreted with caution due to the baseline algorithm's performance limitations. Additionally, we have included a note in the caption to explain this notation. Thank you once again for your valuable insights. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my remaining concerns and questions. They have satisfactorily addressed my comments regarding limited statistics and quality of figures and captions. I maintain my recommendation to accept their work (4).
Summary: This paper presents a regularization technique to improve the capabilities of Goal-Conditioned Reinforcement Learning (GCRL) algorithms. The authors start by motivating the need for their approach and presenting preliminaries about GCRL. Next, the authors present a cohesive theoretical analysis showing that modifying a policy toward a goal variant may achieve higher returns, and that the extent of the modification should not be excessive. Following, the authors present the main regularization technique: Margin-Based Policy Self-Regularization. The experiments in Section 4 are conducted across three different settings, two in a robotic control domain and one in aircraft control. Key takeaways include that MSR regularization 1) can enhance cumulative rewards for the policy while reducing the discrepancy between policies, and 2) the importance weight for the MSR objective must be chosen carefully. #### Post-Rebuttal The authors' rebuttal includes an example of an improved caption, which will benefit the paper, and a statistical analysis of the results, which will improve the validity of the experimental findings. Both of these improvements should be included in the final manuscript. Claims And Evidence: Yes, the claims are supported by sufficient evidence. Methods And Evaluation Criteria: The proposed methods and evaluation make sense for the problem at hand. Theoretical Claims: Section 3 was read carefully. Proofs in the appendix were not verified in-depth. Experimental Designs Or Analyses: The experimental design within the main paper was read carefully. Supplementary Material: Appendix B was read. Relation To Broader Scientific Literature: This paper presents an important theoretical analysis that can benefit GCRL algorithms and inspire future research in this area. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: + The proposed approach is backed by theoretical analysis. 
+ There is an abundance of results supporting the method. The authors do a good job of displaying the policy discrepancy and cumulative rewards side-by-side. Weaknesses: - Many of the figure captions could be improved. - It would be beneficial to verify that the claims made in Section 4 (that are based on Table 1) are valid with respect to statistical significance. Other Comments Or Suggestions: None. Questions For Authors: Please respond to my weaknesses noted above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback on our work. We address your concerns in the following: `W1: Many of the figure captions could be improved.` We greatly appreciate your suggestions. In accordance with the principles of accuracy and conciseness, we have re-formulated the captions of all figures and tables. Taking Figure 6 as an example, which illustrates the policy discrepancy between adjacent desired goals, $D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$, under various $\beta$ settings, supporting our ablation analysis on $\beta$, we have revised its caption from: *"Ablation study on $\beta$. $D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$ between policies for adjacent desired goals. The interpretation of the coordinate axes and the data collection methods are analogous to those described in Fig.1. Results come from MSR-GC-SAC on Reach over 5 random seeds."* to: *"Policy discrepancy between adjacent desired goals, $D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$, under different $\beta$ settings. The meanings of the coordinate axes and the evaluation methods are consistent with those in Fig.1. Results are derived from MSR-GC-SAC on Reach across 5 random seeds."* `W2: It would be beneficial to verify that the claims made in Section 4 (that are based on Table 1) are valid with respect to statistical significance.` We appreciate your suggestion. Taking $D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$ on VVC as an example, we have conducted a statistical significance analysis between HER and MSR-HER. The table below presents the T-test's $p$-values for the evaluation results of policies trained with 5 different random seeds. All the $p$-values are below 0.05, indicating that MSR-HER can train policies with a smaller $D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$ compared to HER. 
||MSR-HER(1)|MSR-HER(2)|MSR-HER(3)|MSR-HER(4)|MSR-HER(5)|
|:-:|:-:|:-:|:-:|:-:|:-:|
|HER(1)|$2.4*10^{-5}$|$1.2*10^{-4}$|$3.3*10^{-11}$|$5.2*10^{-33}$|$8.5*10^{-13}$|
|HER(2)||$7.1*10^{-16}$|$2.6*10^{-14}$|$5.4*10^{-4}$|$2.6*10^{-8}$|
|HER(3)|||$2.9*10^{-4}$|$4.0*10^{-2}$|$2.3*10^{-4}$|
|HER(4)||||$1.5*10^{-5}$|$2.5*10^{-6}$|
|HER(5)|||||$7.6*10^{-7}$|

_**Note**: We trained multiple HER and MSR-HER policies with distinct random seeds (numbers in parentheses indicate the seed index), each evaluated over 12100 episodes. As the T-test necessitates i.i.d. data for both groups being compared, and the test outcomes from different seeds do not belong to the same distribution, we can only conduct pairwise comparisons between the policies trained by HER and MSR-HER with varying random seeds._ Additionally, we noted the large variance in the results of Table 1. We have increased the number of random seeds from 5 to 10 and re-conducted the training and testing, updating Table 1. The updated table for the VVC shows a reduction in variance for the results.

|Algorithms|$D_\text{KL}(\pi_{g+\epsilon}, \pi_g)$|$J(\pi_{g+\epsilon}) - J(\pi_g)$|$J(\pi)$|
|:-:|:-:|:-:|:-:|
|GC-SAC|0.37±0.12|-22.00±5.06|-138.20±14.16|
|MSR-GC-SAC|0.33±0.09|-20.94±7.62|-132.09±8.06|
|HER|0.65±0.22|-7.91±3.22|-69.21±12.87|
|MSR-HER|0.58±0.16|-7.52±3.15|-64.73±13.55|
|GC-PPO|0.08±0.08|-44.50±12.09|-169.04±25.57|
|MSR-GC-PPO|0.17±0.21|-30.65±20.19|-146.64±28.05|

Once again, we appreciate your suggestion and will include the discussion on statistical significance in our manuscript.
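The pairwise seed comparison described in the note can be carried out with a standard two-sample Welch t-test; below is a minimal sketch on synthetic per-episode KL estimates. The means, standard deviations, sample size, and seed are illustrative choices loosely based on the HER vs. MSR-HER rows of the table above, not the authors' data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-episode KL estimates for one HER seed and one MSR-HER seed,
# 12100 evaluation episodes each (moments loosely follow the table above).
her = rng.normal(loc=0.65, scale=0.22, size=12100)
msr_her = rng.normal(loc=0.58, scale=0.16, size=12100)

# Welch's t-test (no equal-variance assumption): one pairwise comparison,
# as in one cell of the HER vs. MSR-HER table.
t_stat, p_value = stats.ttest_ind(her, msr_her, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

With 12100 samples per group, even a modest mean gap of 0.07 yields a vanishingly small p-value, which matches the magnitudes reported in the table.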
Dimensionality Reduction on Complex Vector Spaces for Euclidean Distance with Dynamic Weights
Accept (poster)
Summary: This paper presents an embedding of a d-dimensional vector x into k dimensions such that, for any d-dimensional weight vector w, $\sum_{i=1}^d w_i^2 x_i^2$ is preserved. Specifically, the authors give an additive error in terms of $\varepsilon \|x\|_2^2 \cdot \|w\|_4^2$. The classic Johnson-Lindenstrauss guarantee for unweighted vectors, that is, $w_i=1$ for all $i$, has an additive error of $\varepsilon \|x\|_2^2$, which leads to a multiplicative $(1\pm \varepsilon)$ approximation. Traditionally, the entries of matrices with the Johnson-Lindenstrauss moment property are subsampled Rademacher random variables. The authors extend this construction by choosing the embedding entries to be complex numbers, where the sign of the complex part is Rademacher distributed. This sketch is applied to input vectors $x$ and is then given to a decoding algorithm. The construction is novel and the use of complex numbers is clever. After this setup, the analysis is not too difficult, but requires controlling stochastic processes in ways that are not standard. Claims And Evidence: - Methods And Evaluation Criteria: - Theoretical Claims: I have previously reviewed the paper. While there are some changes, the correctness proofs are the same and I still believe the results to be correct. Experimental Designs Or Analyses: There are no experiments. Comparing the performance between the authors' embedding and the result by Kaban would have been nice, if only to also see whether the theoretical bounds could be improved or whether there is a gap between linear and non-linear methods. It would have also been nice to see if the performance is sensitive to the sparsity of the JL-matrix used (e.g. if the performance is the same if a constant fraction of entries are set to 0). Supplementary Material: - Relation To Broader Scientific Literature: I believe that this result is a bit niche. Why a linear method is important for applications still eludes me. 
In addition, instead of developing a general framework for JL-transforms into a complex space with a range of sparsity/target dimension tradeoffs, the authors give one construction for which it is not clear that the presented analysis is tight. I think a compelling use case for which the authors offer a real improvement over the state of the art, a tight analysis of their method, or alternatively, but unlikely, a separation between linear and non-linear methods, or a general framework for sparse JL transforms into complex spaces would have increased my score substantially. Essential References Not Discussed: - Other Strengths And Weaknesses: I find the ideas quite clever and the technique engaging. Nevertheless, it would have been nice if the result had achieved an optimal error, or if the analysis had been tight, or if the authors had presented a framework for generalizing general JL-embedding matrices, rather than only the complex analogue of dense Rademacher sketches. I understand that it might be difficult to improve the paper in the way that I am suggesting. Alternatively, some experimental focus would have been welcome for a conference like ICML. In the end, I feel this paper barely gets a weak accept. For a higher rating, it would need to address one of the aforementioned points in a satisfactory way. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
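As a point of reference for the comparison the reviewer asks for, the classic unweighted JL guarantee mentioned in the summary is easy to check numerically with a dense Rademacher sketch. A minimal sketch follows; the dimensions and seed are arbitrary choices, and this is the standard real-valued construction, not the paper's complex-valued one.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 2000, 1000

# Dense Rademacher JL matrix: i.i.d. +-1/sqrt(k) entries, so E||Sx||^2 = ||x||^2
# and the squared norm concentrates with relative error O(1/sqrt(k)).
S = rng.choice([-1.0, 1.0], size=(k, d)) / np.sqrt(k)

x = rng.normal(size=d)
rel_err = abs(np.linalg.norm(S @ x) ** 2 - np.linalg.norm(x) ** 2) / np.linalg.norm(x) ** 2
print(f"relative error of the squared-norm estimate: {rel_err:.3f}")
```

Linearity then gives distance preservation for free, since $S(x - y) = Sx - Sy$, which is the property the rebuttal's discussion of linear maps hinges on.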
Rebuttal 1: Rebuttal: We thank Reviewer 3 for the useful feedback. We provide below answers to reviewer’s comments. --- **There are no experiments. Comparing the performance with the result by Kaban would have been nice, if only to also see whether the theoretical bounds could be improved.** Thank you for the suggestion. Based also on your comment, we added some proof of concept experiments where we show the empirical distribution of the estimates $\rho$ of the weighted norm. Link: www.dropbox.com/scl/fo/lvallzefg646aayc826zv/ALbK6BOw0u5PANsePzCDGsM?rlkey=tns4b16zt86ce5tmjpdxdrbnh In particular, the experiments highlight (Fig 3) the significant reduction of the variance of the estimator when using the technique described in Sect 4.4, which uses sparse block matrices. Notably, this technique introduces sparseness (via decomposing the vector into $L$ sub-vectors) to reduce the variance, and the trade-off lies in ensuring that the number of sub-vectors $L$ is small enough that the norm of each one is roughly $1/L$ of the total norm. We will highlight this tradeoff in the paper. Moreover, the experimental results showcase the applicability of the sparse method (Fig 5), as discussed in Sect 4.3, as well as the poor quality of the estimate for near-uniform vectors for the method described in Thm 1.1 (Fig 2), suggesting that the probabilistic analysis of the method is tight. Finally, from the experiments we indeed see a gap in the quality of the estimate between the linear and nonlinear (Kaban, Fig 1) method, as argued in Sect 4.3. Interestingly, the sparse map with the decomposition into $L$ sub-vectors almost closes the gap with the non-linear method. **Why a linear method is important for applications still eludes me.** Thank you for pointing out this lack of clarity. 
The main feature of linearity is the ability to compute distances between vectors by simply computing distances between reduced vectors, since $\lVert x - y \rVert \simeq \lVert g(x - y) \rVert = \lVert g(x) - g(y) \rVert$. Based on your comment, we will include the following paragraph to make this clearer. > Indeed, the prime application of the linearity of JL maps is reducing the time complexity of pairwise distance computations [1], which has applications in clustering [2] and nearest neighbor search [3, 7], e.g. for recommender systems and classification. Other examples include compressed sensing [4], which relies on sparse linear measurements to recover an unknown signal. Finally, the linearity of JL has been exploited to speed up least squares regression [5] and low-rank matrix approximation [6, 1]. (See answer to Reviewer 1 for the new references [1-7]). **A compelling use case that offers a real improvement over SOTA.** We first observe that our approach represents the first linear dimensionality reduction supporting the weighted case. Moreover, we partially address the sparse JL transformation by showing in Section 4.4 that a sparse block structure allows reducing the variance in the case of almost-uniform vectors. **Provide [...] a tight analysis of their method. I understand that it might be difficult to improve the paper...** Thank you for your suggestion. We would indeed like to find a lower bound, but we have so far been unable to obtain one. We conjecture there is some form of information-theoretic lower bound, as arbitrary approximations of the weighted norm would allow reconstructing the vector at hand. We can, however, provide a strong hint that our probabilistic analysis is indeed tight. Note that this is not a formal proof. 
We have, by [1, Example 3.3] and [2, Thm 1.1], that $\lVert \sum_{i=1}^k S_i \rVert_q = \Theta\Big( \big( \sum_{i=1}^k \lVert S_i \lVert_q^q \big)^{1/q} + \big( \sum_{i=1}^k q \lVert S_i \lVert_2^2 \big)^{1/2} \Big) \geq C \sqrt {kq} \cdot W(S_1)$ for some universal constant $C$. We take $x = (1, \dots, 1) \in \mathbb{R}^d$ and $w = (1, \dots, 1) \in \mathbb{R}^d$. Note that $\lVert x \lVert^2_2 \lVert w \lVert^2_4 = d^{3/2}$. Then, in the proof of Lemma A.1, we have that $W(H_{1,1,i}) = |I'_{d,3}| \geq d^{3/2} / 30$ if $d > 3$. This then implies that $\lVert H_1 \lVert_q \geq C \sqrt {kq} \cdot d^{3/2}$ for any $q > 2$, for some universal constant $C$. If we suppose that we can obtain an upper bound on the tail probabilities of the type $\mathbb{P}[|H_1| > t] \leq \exp( - c (t / f(d) )^{\alpha} )$, with $\alpha > 0$, we obtain that $\lVert H_1 \lVert_q \leq C' f(d)$, for some universal constant $C'$ that depends on $q$ but not $d$. Therefore, if we take $f(d)\in o(d^{3/2})$, for large enough $d$ it would violate the condition found above, and we have a contradiction. In conclusion, the dependence of our bounds on $\lVert x \lVert^2_2 \lVert w \lVert^2_4 = d^{3/2}$ seems to be optimal. - [1] Latała. "Estimation of moments of sums of independent real random variables" The Annals of Probability. - [2] Hitczenko et al. "Moment inequalities for sums of certain independent symmetric random variables." Studia Math. --- Rebuttal Comment 1.1: Comment: I think it may have not been clear what I meant by "important for applications" or an improvement over the SOTA. I understand how linear transforms are beneficial in various models. What I wanted to know is whether there exists a problem which uses estimates for the weighted Euclidean norm as a primitive that obtains an improvement via this work. The references that you provide showcase that sketching is important, but none of these applications require the weighted norm. 
Indeed, for problems like regression or low rank approximation, weighted norms make little sense, at least to me. --- Reply to Comment 1.1.1: Comment: We had misunderstood the "improvement", as our method is the first of its kind. We now see you meant improvements in applications. We sketched in the paper (e.g. lines 28-38) applications of the weighted norms in low rank approx. (e.g. Woodruff et al. 2024), recommender systems (e.g. Wang et al. 2015; Gu et al. 2016) and k-NN classifiers (e.g. [1], and Moreo et al. 2020, Bhattacharya 2017). We indeed agree with you that an important application of our technique would make it more impactful, thanks for the suggestion. We will discuss the following applications, where our technique can yield some improvements: **Weighted least squares** We consider the weighted least squares problem [5, Chapter 6], which provides robust estimators in the presence of uneven reliability in the measurements [6], where given a data matrix $X \in \mathbb{R}^{n \times d}$, an observation vector $y \in \mathbb{R}^n$, and a weight vector $w \in \mathbb{R}^n$ (with $w_i \ge 0$ for all $i$), the goal is to find $\theta \in \mathbb{R}^d$ that minimizes the weighted norm $\lVert X \theta - y\rVert_w^2 = \sum_{i=1}^n w_i^2 (X_i^\top \theta - y_i)^2$. Solving this problem (e.g. by QR decomposition) takes $O(n d^2)$ time. One possible approach when $n \gg d$ is to form the weighted data by defining $X' = WX$ and $y' = Wy$, where $W$ is the diagonal matrix with entries $w_i$, and then to apply a standard Johnson–Lindenstrauss (JL) transform to reduce the problem dimension [7]. As noted in the introduction, this, however, requires the weights to be known before the dimensionality reduction, and does not allow solving the problem with multiple weights. In contrast, we sketch the use of our weighted JL-like construction as follows. 
We consider the linear map $g(x)$ that reduces any vector $x$ to a lower-dimensional one, and the norm estimator $\rho(g(x), w)$ given by Theorem 1.1. Then, the reduced data matrix $g(X) \in \mathbb{C}^{k\times d}$ and observation vector $g(y) \in \mathbb{C}^k$ are computed in $O(kdn)$ time, before the weight vector is revealed. Once $w$ is given, the weighted least squares objective $\lVert X \theta - y \rVert_w^2$ is approximated by the weighted norm oracle $F(\theta) = \rho(g(X)\theta-g(y), w)$, which by linearity of $g$ equals $\rho(g(X\theta-y), w)$, and an iterative optimization method (e.g. Nelder-Mead) is used to update $\theta$ based on low-dimensional evaluations of $F(\theta)$, which take $O(kd)$ time each. This allows solving least squares for several weight vectors, possibly amortizing the cost of computing the reduced representations. **Applications in machine learning** - In [2], the authors propose a time-efficient alternative to the dynamic time warping metric for time series based on the Mahalanobis distance with diagonal covariance matrix, which is the weighted Euclidean distance. When the distance is used for classification, a weight vector is learned on a per-class basis, resulting in a set of different weight vectors. Since the complexity of the distance computation scales linearly with the time series' length, this can become computationally challenging, especially in a streaming setting. Our dimensionality reduction technique allows compressing the time series into reduced vectors, and subsequently computing the weighted distance between such time series, with any given weight vector. - Relatedly to the previous application, in [3] and [4] the authors propose kernel methods where the weighted distance (phrased as a Mahalanobis distance with diagonal covariance matrix) is used as a substitute for the plain Euclidean distance in RBF-like kernels. 
Our techniques would then allow applying such kernels to the reduced vectors rather than to the original data, with any weight vector (e.g. for two different tasks). ---- We also note that in ICML 2019, Lei et al. started the study of data structures supporting nearest neighbor searches on weighted Euclidean metrics, so the field of weighted norms seems to be developing. Finally, we hope that our new experimental results and the discussion on the tightness of our analysis addressed your other concerns (if so, we'd be grateful if you updated your score!). ---- - [1] R. Paredes and E. Vidal, Learning Weighted Metrics to Minimize Nearest-Neighbor Classification Error. IEEE TPAMI, 2006. - [2] Prekopcsák, Z., Lemire, D. Time series classification by class-specific Mahalanobis distance measures. Adv Data Anal Classif 6, 2012. - [3] Abe, S. (2005). Training of Support Vector Machines with Mahalanobis Kernels. In: Artificial Neural Networks: Formal Models and Their Applications. - [4] Kamada, Y., Abe, S. (2006). Support Vector Regression Using Mahalanobis Kernels. In: Artificial Neural Networks in Pattern Recognition. - [5] Golub, G. H., & Van Loan, C. F. (2013). Matrix Computations. - [6] Fox, J. (2015). Applied Regression Analysis and Generalized Linear Models. - [7] Sarlos, T. Improved approximation algorithms for large matrices via random projections. FOCS 2006.
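Returning to the weighted least squares application sketched above: the $O(nd^2)$ baseline route (scaling the rows by $W$ and solving ordinary least squares) is the step that the linear sketch is meant to amortize over many weight vectors. A minimal illustrative snippet of that baseline (our own sketch, not code from the paper):

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve min_theta ||X theta - y||_w^2 = sum_i w_i^2 (X_i theta - y_i)^2
    by scaling row i of X and y by w_i and solving ordinary least squares."""
    Xw = X * w[:, None]   # W X, with W = diag(w)
    yw = y * w            # W y
    theta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return theta

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true                      # noiseless, so WLS recovers theta_true
w = rng.uniform(0.5, 2.0, size=n)       # non-negative weights, known in advance

theta = weighted_least_squares(X, y, w)
```

Note that here $w$ must be fixed before the solve; the point of the rebuttal is that the linear sketch lets $g(X)$ and $g(y)$ be computed once, before $w$ is revealed.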
Summary: This paper explores dimensionality reduction on complex vectors for Euclidean distances. The authors decompose the complex dimensionality reduction into several Rademacher chaos random variables, for which novel concentration inequalities for sums of independent Rademacher chaoses are derived. Claims And Evidence: The claims seem to be reasonable. Methods And Evaluation Criteria: There are no experiments. Theoretical Claims: I checked the proofs of some theorems, and they sound correct. Experimental Designs Or Analyses: No experiments are provided. Supplementary Material: I checked the supplementary material. Relation To Broader Scientific Literature: This paper provides a theoretical framework for finding linear functions in the case of the weighted L2 norm. Essential References Not Discussed: The references are sufficient. Other Strengths And Weaknesses: This paper provides a theoretical framework for finding linear functions in the case of the weighted L2 norm. For weaknesses, please refer to the problems below. Other Comments Or Suggestions: Few experiments should be provided. Questions For Authors: 1) The title contains dynamic weights. It seems that the authors do not mention them in detail in this paper. 2) The weighted L2 norm is explored. It would be meaningful to study more general norms. 3) Introducing complex vector spaces is interesting. What is the effect of the complex vector spaces when devising specific algorithms? 4) It would be much better to use toy examples to show the probability bound. Maybe a specific example for privacy-preserving similarity search should be given. 5) What is a near-uniform vector? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer 2 for the useful feedback. We provide detailed answers to the comments below. ---- **Few experiments should be provided.** Thank you for the suggestion; we added some proof-of-concept experiments where we show the empirical distribution of the estimates $\rho$ of the weighted norm. See the comment of Reviewer 3 for details. Link: www.dropbox.com/scl/fo/lvallzefg646aayc826zv/ALbK6BOw0u5PANsePzCDGsM?rlkey=tns4b16zt86ce5tmjpdxdrbnh **The title contains dynamic weights, but they are not mentioned in detail** Based on your feedback, we will expand the introduction (e.g. lines 28-38, second column) with the following sentences and references to highlight the practical relevance of weighted norm queries where the weights can change dynamically from one query to another. > However, weights might change at each query, or they might not be known at preprocessing time, a setting that we hereinafter refer to as "dynamic weights". Examples include nearest neighbor methods [1] for classification or recommender systems [2], where the relative importance of features might depend on the type of query at hand. Weights might also be estimated on highly dynamic datasets which require frequent updates of the parameters (e.g., (Banihashem et al., 2024)). - [1] Indyk, Piotr, and Rajeev Motwani. "Approximate nearest neighbors: towards removing the curse of dimensionality." STOC 1998. - [2] Baumgärtner, Tim, et al. "Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking." EMNLP 2022. **What is the effect of the complex vector spaces?** The intuition on why a complex space is useful is that it allows us to remove the bias in the estimator for $\| x \|^2_w$ when it contains terms with three or more multiplicands.
More formally, the key observation, which can be found in Theorem 4.1, is that the contribution of $x_{j_1} x_{j_2} w^2_{j_3}$ to the estimator is multiplied by the random variable $A_{i, j_1} A_{i, j_2} A^2_{i, j_3}$. Since each entry of $A$ is iid uniform in $\{\pm 1, \pm i\}$, this r.v. is uniform on either $\{\pm 1, \pm i\}$ or $\{\pm 1\}$ when any of the indices $j_1, j_2, j_3$ are distinct (and thus it is $0$ in expectation), and it is the constant $1$ when $j_1, j_2, j_3$ are all the same, since $(\pm 1)^4 = (\pm i)^4 = 1$. Then, in expectation only the $x_{j}^2 w^2_{j}$ terms contribute to the estimator. **It is meaningful to study the general norm.** This is a very intriguing remark. We indeed agree that exploring norms beyond $L_2$ would be of great interest, albeit not trivial: we believe that the approach can be generalized to $L_p$ by using roots of unity of higher order. Based on your comment, we will introduce the following remark to guide future work in this direction. >We conjecture that the map to the complex plane we provide here admits a generalization to the estimation of arbitrary (weighted) $L_p$ norms. Indeed, let $h = \sum_{j_1, \dots, j_p}\prod_{k=1}^p A_{j_k}x_{j_k}$. If each $A_j$ is uniform on the $p$-th roots of unity, we have that $\mathbb{E}[\prod_{k=1}^p A_{j_k}] = 1$ if all the $j_k$'s are the same, and $0$ otherwise. Therefore, $\mathbb{E}[ h ] = \| x \|_p^p$. However, the variance of the estimator grows with $p$, and novel approaches should be developed in order to reduce this growth. **Use toy examples to show the probability bound and the privacy-preserving similarity search** We added some proof-of-concept experiments on toy data to show the distribution of the estimator. See the reply to the first comment and Reviewer 3.
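The bias-removal mechanism described above can be checked empirically with a small simulation. The snippet below is our own proof-of-concept construction, consistent with the description of $A$ and of the term $A_{i,j_1}A_{i,j_2}A^2_{i,j_3}$, and not necessarily the exact estimator $\rho$ of the paper: each per-row estimate multiplies $(Ax)_i^2$ by $\sum_j A_{ij}^2 w_j^2$, so that only the diagonal terms $x_j^2 w_j^2$ survive in expectation.

```python
import numpy as np

def reduce_and_estimate(x, w, k, rng):
    """Toy weighted-norm estimator with 4th roots of unity (illustrative only).

    A has iid entries uniform on {+1, -1, +i, -i}; g(x) = A @ x is the linear
    reduction.  Off-diagonal terms x_{j1} x_{j2} w_{j3}^2 are multiplied by
    A_{j1} A_{j2} A_{j3}^2, which is 0 in expectation unless j1 = j2 = j3,
    so the average estimates sum_j x_j^2 w_j^2.
    """
    d = len(x)
    roots = np.array([1, -1, 1j, -1j])
    A = rng.choice(roots, size=(k, d))
    gx = A @ x                    # reduced representation, computable before w is known
    s = (A ** 2) @ (w ** 2)       # real-valued, since each A_{ij}^2 is +1 or -1
    return np.mean((gx ** 2) * s).real

rng = np.random.default_rng(0)
d, k = 5, 200_000
x = np.ones(d)
w = np.ones(d)
true_val = np.sum(x**2 * w**2)    # = 5
est = reduce_and_estimate(x, w, k, rng)
```

With these parameters the empirical mean concentrates tightly around the true weighted norm, illustrating the unbiasedness argument above.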
An example of privacy-preserving similarity search is the following: release a dataset, so that a user can detect if there exists a point $x$ within distance $r$ from a given query $q$, but no information on the exact values of $x$ is released. JL provides a solution under differential privacy (Blocki et al, FOCS 12). We believe that an interesting extension would be to release a dataset so that a user can detect if there are points within the desired weighted norm without revealing the exact point coordinates. Indeed, the variance of the estimates would prevent the reconstruction of the original vectors. We plan to add the following sentence (at line 434, second column): >Finally, we believe that our constructions can be of interest for privacy-preserving similarity search, as they might enable the release of datasets that allow users to detect whether there are near points within the desired weighted norms without releasing the details of the vectors, similar to what is done in (Blocki et al, FOCS 12) for JL. **What is the near-uniform vector?** A uniform vector is a vector where all entries are the same. A near-uniform vector is a relaxation of this condition, where splitting the vector into contiguous parts results in sub-vectors with roughly the same norm. We give a formal definition at the beginning of Section 4.4, and we will insert the informal description above in the paper to clarify it.
Summary: The main result of the paper is the following (Theorem 1.1): Let $\epsilon, \delta \in (0,1)$ and $\Delta \ge 0$ be given parameters. There is a function $g : \mathbb{R}^d \rightarrow \mathbb{R}^{O(\Delta^2\log(1/\delta)/\epsilon^2)}$ and an estimator $\rho(g(x), w)$ such that for any vectors $x,w \in \mathbb{R}^d$, with prob. $1-\delta$ $$ \left | \left(\sum_{i=1}^d x_i^2 w_i^2 \right) - \rho(g(x),w) \right| \le \epsilon \cdot \|x\|_2^2 \cdot \|w\|_4^2/\Delta. $$ The authors seem to have missed the fact that something similar to the main theorem easily follows from a one-line calculation from the standard Johnson-Lindenstrauss lemma. Let me give the full proof below: Let $\Pi: \mathbb{R}^d\rightarrow \mathbb{R}^{O(\log(1/\delta)/\epsilon^2)}$ be a JL map. It is well known that for any vectors $a,b$, with prob. $1-\delta$, we have $$ | \langle \Pi a, \Pi b \rangle - \langle a, b\rangle | \le \epsilon \|a \|_2 \|b\|_2.$$ (This is a standard exercise, e.g. see hw 1 here: https://rajeshjayaram.com/homeworks/hw2.pdf). Now apply this to the vectors $x' = [x_1^2, \ldots, x_d^2]$ and $w' = [w_1^2, \ldots, w_d^2]$. We have $\sum_{i=1}^d x_i^2 w_i^2 = \langle x', w' \rangle$, so $$ \left | \left(\sum_{i=1}^d x_i^2 w_i^2 \right) - \langle \Pi x', \Pi w' \rangle \right| \le \epsilon \cdot \|x'\|_2 \|w'\|_2 \le \epsilon \|x\|_2^2 \| w\|_2^2. $$ Then we can scale $\epsilon$ by any $\Delta \ge 0$ to get an error guarantee of $\epsilon \|x\|_2^2 \|w\|_2^2/\Delta$ by projecting $x$ onto $O(\log(1/\delta) \Delta^2 /\epsilon^2)$ dimensions. This version is slightly weaker than the main theorem since we replace the $4$-norm with the $2$-norm, but better in the sense that we don't have to take the max over two dimensions as is done in Theorem 1.1. However, I can't imagine a situation where replacing the $4$-norm by the $2$-norm would play a big role. This argument also has the benefit that it fits into a few lines with known tools rather than many pages of complicated analysis.
Furthermore, this construction seems easier to implement and is arguably much more practical than the "Rademacher chaos" based construction of the paper. Update: After reading the rebuttal, I have updated the score. Claims And Evidence: Yes, the proofs seem correct although quite complicated. Methods And Evaluation Criteria: No empirical results. Theoretical Claims: The proofs seem correct but the calculations are long, so I was not able to follow them directly. However, the simpler argument I gave above is immediate. Experimental Designs Or Analyses: No experiments. Supplementary Material: No. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: Are there any realistic scenarios where the construction of the paper is to be preferred over the simpler construction outlined above? Other Comments Or Suggestions: None Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
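The reviewer's entrywise-squaring construction is indeed easy to implement; a minimal numpy sketch (our own illustration, using a Gaussian JL map) is below. Note that squaring the entries makes the overall map nonlinear in $x$, which is the property at issue in the ensuing discussion.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 50, 20_000

x = rng.standard_normal(d)
w = rng.standard_normal(d)

# Square entrywise first (this step is nonlinear in x), then apply an
# ordinary Gaussian JL map Pi and estimate <x', w'> via <Pi x', Pi w'>.
xp, wp = x**2, w**2
Pi = rng.standard_normal((k, d)) / np.sqrt(k)
est = (Pi @ xp) @ (Pi @ wp)

true_val = np.sum(x**2 * w**2)   # = <x', w'> = sum_i x_i^2 w_i^2
```

With $k$ this large, the inner-product estimate lands well within the $\epsilon \lVert x'\rVert_2 \lVert w'\rVert_2$ error band described above.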
Rebuttal 1: Rebuttal: We are very confused by this review, and we believe that it might be due to a misunderstanding. The summary that the reviewer provides is misleading and incomplete, as in their re-formulation of our main result, Theorem 1.1, they crucially omit that the function $g$ is linear. Because of this, the reviewer then describes the construction of a nonlinear function, which is a fundamentally different goal from the one we pose in this paper (e.g., because it does not allow computing distances between vectors). In fact, we explicitly address the proposed nonlinear construction in the Introduction section (lines 75-92, first column, e.g. Equation 1) of our submission. We remark that linearity is a desirable property also in practical scenarios. In order to make this even clearer to the reader, we will add the following sentences after line 92 in the introduction. > Indeed, the prime application of the linearity of JL maps is reducing the time complexity of pairwise distance computations [1], which has applications in clustering [2] and nearest neighbor search [3, 7], e.g. for recommender systems and classification. Other examples include compressed sensing [4], which relies on sparse linear measurements to recover an unknown signal. Finally, the linearity of JL has been exploited to speed up least squares regression [5] and low-rank matrix approximation [6, 1]. We also remark that Section 4.4 provides new results that did not appear in the previous version of the paper. We indeed show that by using a sparse blocked matrix multiplication it is possible to improve the quality of the results under some constraints on the input vectors. We then kindly ask the reviewer to re-evaluate the paper in light of this clarification of our results. ----- References: - [1] Cunningham, John P., and Zoubin Ghahramani. "Linear dimensionality reduction: Survey, insights, and generalizations." The Journal of Machine Learning Research 16.1 (2015): 2859-2900.
- [2] Makarychev, Konstantin, Yury Makarychev, and Ilya Razenshteyn. "Performance of Johnson-Lindenstrauss transform for k-means and k-medians clustering." Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing. 2019. - [3] Indyk, Piotr, and Rajeev Motwani. "Approximate nearest neighbors: towards removing the curse of dimensionality." Proceedings of the thirtieth annual ACM symposium on Theory of computing. 1998. - [4] Upadhyay, Jalaj. "Randomness efficient fast-johnson-lindenstrauss transform with applications in differential privacy and compressed sensing." arXiv preprint arXiv:1410.2470 (2014). - [5] Yang, Tianbao, et al. "Fast Sparse Least-Squares Regression with Non-Asymptotic Guarantees." arXiv preprint arXiv:1507.05185 (2015). - [6] Ghojogh, Benyamin, et al. "Johnson-Lindenstrauss lemma, linear and nonlinear random projections, random Fourier features, and random kitchen sinks: Tutorial and survey." arXiv preprint arXiv:2108.04172 (2021). - [7] Ghalib, Abdulaziz, et al. "Clustering and classification to evaluate data reduction via Johnson-Lindenstrauss transform." Future of Information and Communication Conference. Cham: Springer International Publishing, 2020. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the feedback. I can see now that linearity is a property that is not preserved by my embedding and it is an important property to have in practice. However, it still seems believable that a much simpler analysis could obtain the same bounds that I proposed, but with linearity. In particular, there is existing prior work on analyzing JL type lemmas with complex weights (e.g. see lemma 4 here http://www.cs.cmu.edu/afs/cs/user/dwoodruf/www/knpw11.pdf). I haven't had time to work out the calculations, but lemma 4 in the link is just saying using roots of unity gives us the same moment bounds (and hence concentration) as the standard JL analysis. 
It is worth thinking about whether this lemma and the work in that paper (which the authors seem to have missed) already handle much of the technical work of the paper or simplify the analysis. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We are confident that highlighting the linearity of the method, also thanks to your comment, will make our goal clearer, and it will ultimately make for a better paper. Thanks also for the new reference; we will add it to the related work and discuss the similarities with our method, namely using complex numbers to remove bias from estimators. However, this paper has major differences from our approach. In particular, we observed the following: - The approach of Kane et al. uses a more complicated estimator, which seems to use complex numbers in a fundamentally different way from ours, as they use the $r$-th roots of unity, with $r \in O(\log(\epsilon^{-1}))$. Our techniques use the $4$-th roots of unity, since we have interactions between 4 terms in our estimators (two $x_j$'s and two $w_j$'s), and $\epsilon$ only enters in the dimension of the reduced vectors. Therefore, the role of the roots of unity in the method seems to be different, and the analysis cannot be carried over trivially. - Moreover, their approach estimates the norm $\lVert x \rVert_p$, but **only** for the range $p \in (0,2)$. Our approach can be extended with small modifications to estimate the $\lVert x \rVert_4$ norm by taking $w = x$. The approach can be extended to estimate the $p$-th norm by taking $p$-th roots of unity, albeit with worse concentration (see also the reply to Reviewer 2). Therefore, it seems unlikely that the method by Kane et al. would generalize to weighted norms without major modifications to both the method and the analysis (although this is an interesting research avenue, which we will mention).
Furthermore, their Lemma 4 doesn't seem to be directly applicable in our case: - If their Lemma 4 were used to bound our term $\sum_{j_1, j_2, j_3} A_{j_1}A_{j_2}A^2_{j_3} x_{j_1} x_{j_2} w_{j_3}^2$, using $A_{j_1}A_{j_2}A^2_{j_3}$ as their $z_j$ and $x_{j_1} x_{j_2} w_{j_3}^2$ as their $x_j$, it would fail because the $z_j$'s are **not** independent. This is why we resort to Bonami's hypercontractive inequality to bound the tails of the Rademacher chaos, rather than a Khintchine-like inequality. We will remark on this in the paper. - A different application of Lemma 4 could be the following: setting $x'$ (the $x$ of Lemma 4) as our $x \odot w$ would yield that $\mathbb{E}[|\langle x', z\rangle | ^t] \leq f(t) \lVert x \rVert_w^2$. This, however, does not seem to generalize to weighted norms in our setting. Indeed, this approach requires knowing $w$ at the time of the dimensionality reduction, and it does not allow performing multiple queries with different weights. We agree with you that a simpler method, or a simpler analysis of the existing method, could exist. However, it is far from trivial to obtain. We provide some examples of suboptimal approaches below. - Using existing JL approaches either loses linearity (as discussed before), or requires the weights $w$ to be fixed and known beforehand (as discussed in lines 55-65 of the introduction). - Using variance-based bounds instead of the Orlicz-norm-based tail bounds (which are one of the technical contributions of our paper) yields an analysis that is equally complicated, as it involves a Rademacher chaos of order 8. Moreover, we conjecture that it does not yield strong enough concentration to only use averaging in the estimator, and it would also require the median trick, complicating the estimator. We can add a remark about this in the paper.
- Using Hoeffding's inequality to obtain exponentially decaying tail probability bounds introduces a dependency on the $L_1$ norm of $x$, which scales linearly with $d$ and yields suboptimal results. Indeed, the Orlicz-norm-based analysis yields results that seem to be tight in $d$ (see the reply to Reviewer 3). We hope that we have addressed all your concerns!
Projection Optimization: A General Framework for Multi-Objective and Multi-Group RLHF
Accept (poster)
Summary: This paper primarily focuses on aligning large language models (LLMs) to multiple objectives using per-objective preference data. Prior works on this topic primarily aim to achieve Pareto optimal alignment across all objectives by linearly aggregating all objectives into a single unified form and optimizing the unified objective. In this work, they aim to propose an efficient strategy to optimize for the non-linear aggregation of multiple objectives. Moreover, they consider a multi-group setting wherein each group assigns a different level of importance to the objectives. They propose to use the multi-objective p-norm aggregation framework from social choice theory, which leads to general (non-linear) function classes, and also account for the group preferences in this aggregation framework. Instead of training a policy that optimizes for this aggregated objective, the authors propose an alternative approach: training a policy that minimizes the distance between its expected reward vector and a target set $W$. This target set consists of all vectors whose aggregated p-norm value exceeds a certain constant $c$. Moreover, they also consider the case where each group has a different target set $W$. Then the policy optimization aims to minimize either (i) the distance between the expected reward vector incurred by the policy and the intersection of all target sets across groups (consensus setting) or (ii) a weighted average of the distances between the expected reward vector incurred by the policy and the target set of each group (malfare setting). The proposed algorithm consists of two key steps. First, it computes the projection direction between the expected reward vector and the target set. Then, using this direction vector as weights over the objectives, it optimizes a weighted loss using two approaches: reward-free multi-objective optimization and reward-based multi-objective optimization.
The algorithm iteratively alternates between computing the direction vector and updating the policy. The authors also provide convergence guarantees for both the consensus and malfare settings. Finally, they propose an online variant of the algorithm, where responses are collected using the current policy, and preferences are gathered based on the objective with the greatest difference. The importance weights over objectives are estimated online using a likelihood-based approach. For experiments, they fine-tune the Llama2-7B model on Anthropic-HH data using the offline version of the proposed approach for $p=0.5$ and $p=-\infty$ and compare the performance against relevant baselines. Claims And Evidence: The paper primarily builds upon the target set approach from [Yu et al., 2021] which aims to minimize the distance between the expected reward vector $ S(\pi) $ incurred by the policy and the set of acceptable reward vector values whose aggregation exceeds a predefined constant. However, [Yu et al., 2021] was designed for a reinforcement learning (RL) setting where the initial state is fixed (see Page 4 - Interaction Protocol), resulting in a single $ S(\pi) $ that can be directly compared with the target set. In contrast, in this RLHF setting, optimization is typically performed over a distribution of prompts, where the objective is to maximize the expected reward across prompts. This key aspect is not addressed in the paper. Furthermore, according to the objective defined in Section 3.1, where the expectation over prompts is taken outside the aggregation, the aggregated reward for each prompt should either satisfy the target set constraint or be close to it. However, the feasibility of this approach and whether it is actually implemented in the experiments remain unaddressed. To the best of my knowledge, this presents a fundamental issue that requires clarification from the authors. 
## update after rebuttal My concerns regarding having multiple algorithms are addressed. Essentially, there are only 2, as the authors point out, and only one of them (the offline version) has been evaluated. I still believe the paper can be restructured quite a bit to improve the clarity. I kindly recommend that the authors include additional experiments and more experimental discussion in the paper, and improve the introduction so that $S(\pi)$ is well defined, as pointed out. Methods And Evaluation Criteria: The experimental evaluation is quite limited. Only one of the proposed algorithms has been evaluated, with minimal discussion of the results. Furthermore, the online version of the algorithm lacks any experimental evaluation. Theoretical Claims: In the theoretical proofs, $S(\pi)$ is treated as the expectation over both prompts and policy, with the bounds based on the distance between the expected reward vector over all prompts and the target set. However, optimizing the expected aggregated reward, as detailed in Section 3.1, is not equivalent to ensuring that the aggregation of the expected reward vector satisfies the constraints (or is close to the target set of vectors that satisfy them). This is equivalent to interchanging expectation and aggregation and requires proper comments from the authors discussing the validity of such a maneuver. Experimental Designs Or Analyses: The experimental evaluation is quite limited. Only one of the proposed algorithms has been evaluated, with minimal discussion of the results. Furthermore, the online version of the algorithm lacks any experimental evaluation, and details regarding the AR baseline are not discussed. Supplementary Material: The appendix is quite detailed and provides comprehensive proofs for all the stated theorems in Section B. However, the analysis appears to consistently rely on the expected reward vector over the prompts.
Relation To Broader Scientific Literature: This paper primarily focuses on aligning large language models (LLMs) to multiple objectives using per-objective preference data. Prior work in this area mainly seeks to achieve Pareto-optimal alignment by linearly aggregating all objectives into a single unified form and optimizing the resulting objective. In contrast, this work proposes an efficient strategy to optimize for the non-linear aggregation of multiple objectives. Moreover, the authors consider a multi-group setting where each group assigns different levels of importance to the objectives. They adopt the multi-objective $ p $-norm aggregation framework from social choice theory, which generalizes to non-linear function classes and incorporates group preferences into the aggregation process. To the best of my knowledge, the proposed methodology of minimizing the reward vector's distance to the target set represents a novel framework in this area of research. Essential References Not Discussed: The following works related to group preference and multi-objective alignment are missing in related works: 1. Zhao, Siyan, John Dang, and Aditya Grover. "Group preference optimization: Few-shot alignment of large language models." arXiv preprint arXiv:2310.11523 (2023). 2. Ramesh, Shyam Sundhar, Yifan Hu, Iason Chaimalas, Viraj Mehta, Pier Giuseppe Sessa, Haitham Bou Ammar, and Ilija Bogunovic. "Group robust preference optimization in reward-free rlhf." Advances in Neural Information Processing Systems 37 (2024): 37100-37137. 3. Guo, Yiju, Ganqu Cui, Lifan Yuan, Ning Ding, Zexu Sun, Bowen Sun, Huimin Chen et al. "Controllable preference optimization: Toward controllable multi-objective alignment." arXiv preprint arXiv:2402.19085 (2024). 4. Mukherjee, Subhojyoti, Anusha Lalitha, Sailik Sengupta, Aniket Deshmukh, and Branislav Kveton. "Multi-Objective Alignment of Large Language Models Through Hypervolume Maximization." arXiv preprint arXiv:2412.05469 (2024). 
Other Strengths And Weaknesses: Strengths: 1. This paper focuses on aligning large language models (LLMs) to multiple objectives using per-objective preference data. It proposes an efficient strategy for optimizing the non-linear aggregation of multiple objectives. Moreover, it considers a multi-group setting where each group assigns different levels of importance to the objectives. To the best of my knowledge, the proposed methodology of minimizing the reward vector's distance to the target set represents a novel framework in this area of research. 2. Multiple algorithms have been proposed for both offline and online settings, with theoretical analysis establishing their convergence guarantees. Weaknesses: 1. See comments above regarding the theoretical claims and experimental evaluation/design. 2. The paper is overly dense and difficult to follow in many places. It introduces numerous algorithms and concepts, but not all have been experimentally validated. Other Comments Or Suggestions: Should it be $\theta_i$ instead of $\theta$ in (3)? What is $\Pi_{W}(V)$? Questions For Authors: Kindly comment on the issues in the theoretical claims detailed above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your detailed review! We are happy to address your questions as follows. > 1. The discussion of the expectation over prompts Thanks for the good question! Our goal is to find the model $\pi$ that maximizes the total expected reward $u(\pi)=\big(\sum_{i=1}^m \alpha_i (\mathbb{E}_{x\sim \rho, y\sim \pi}[r_i(x,y)])^p\big)^{1/p},$ where the aggregation is applied to the **expected reward $u_i(\pi) = \mathbb{E}_{x\sim \rho, y\sim \pi}[r_i(x,y)]$**, rather than to the reward for each prompt. In practice, the utility function (i.e., performance) of the language model is always measured over the entire prompt distribution, and our aggregation is applied to this overall utility. We acknowledge that our introduction to multi-objective learning (namely, Eq.(1), (2) and the beginning of Section 3.1) may have been unclear and misrepresented our motivation, potentially leading to this misunderstanding. We can clarify it as follows: For a language model $\pi$ (and a single group), the utility of the model for objective $i$ is defined as $u_i(\pi) = \mathbb{E}_{x\sim \rho, y\sim \pi}[r_i(x,y)]$, and the model aims to maximize the aggregated social welfare $(\sum_{i=1}^m \alpha_i (u_i(\pi))^p)^{1/p}$. To make this problem more tractable, we try to minimize the distance between the target set and the current expected reward vector $S(\pi) = (u_1(\pi), u_2(\pi), \cdots, u_m(\pi))$ (ignoring the regularizer for clarity), which is a reasonable alternative (see Examples 3.1 and 3.2 in our paper). >2. The experimental evaluation and discussion We conduct our experiments on the Anthropic-HH dataset using the objectives Helpful, Harmless, and Humor, which are popular choices in previous MORLHF papers. We provide comprehensive results across different weights and objectives in both single-group and multi-group settings. Additionally, we include further experiments in the additional experiments part of this response.
We also provide an analysis for hyperparameter selection (the selection of $c$) in the response to Reviewer MMQx. The experimental results demonstrate that our approach achieves a smaller distance between the target set and the expected reward vector, and also **achieves balance between diverse objectives**. For example, with the weight (0.7,0.3), objectives 'helpful' and 'harmless', and the aggregation parameter $p=0.5$, MOPO gets the utility $u_{MOPO}=(0.123,0.412)$, while $u_{RS}=(1.203,-0.707), u_{MOD}=(1.295, -0.8)$. This shows that MOPO not only achieves a smaller distance to the target set but also better balances multiple objectives, aligning with our theoretical motivation of striking a balance between linear aggregation and max-min RLHF. Since offline RLHF is more common, practical, and convenient (we can easily access offline datasets and reward models), we only evaluate our method in the offline RLHF setting. We believe that online MOPO can also achieve better expected utility and better performance compared to linear aggregation, and we leave it as future work. The AR baseline means that we directly aggregate the rewards into $r(x,y) = (\sum_{i=1}^n \alpha_i r_i^p(x,y))^{1/p}$, and train on this reward directly using PPO. Since it requires aggregating the individual rewards, it cannot be applied to the negative reward setting, while our algorithm also works with negative rewards. In the experiment, we apply $r_i(x,y) = \max\{r_i(x,y),0\}$ to make the rewards non-negative. >3. The paper is overly dense in many places. Thanks for the reminder. We will improve our writing and proofs to make them easier to follow in our final version. >4. Additional experiments. We also report experimental results on the Summarize-from-Feedback dataset. We fine-tune ALPACA-7B for objectives Summary and Faithful and get two policies $\pi_1, \pi_2$. Then, we perform experiments on these two policies.
We use $\alpha=(1/2,1/2)$ and $p=0.5$ for the aggregation, and choose $c$ adaptively by $c=u(\pi^t) + 0.01$, where $u(\pi^t)$ is the expected utility of $\pi^t$ at round $t$ (see the response to Reviewer MMQx for the selection of $c$). The results are shown in the following table, which shows that our MOPO performs better than RS and MOD.

| $\alpha$ | MOPO | RS | MOD |
|-----------|-------|-------|-------|
| (0.3,0.7) | 0.521 | 0.515 | 0.507 |
| (0.5,0.5) | 0.522 | 0.517 | 0.502 |
| (0.7,0.3) | 0.549 | 0.552 | 0.530 |

>5. Typos and definitions Thanks for the reminder. Eq. (3) should be $\theta_i$ instead of $\theta$, and $\Pi_W(V)$ denotes the projection of the point $V$ onto the set $W$, which can be written as $\Pi_W(V) = \arg \min_{x \in W} \lVert V-x \rVert_2^2$. >6. Missing references Thank you for providing additional relevant references! We will include a detailed discussion in our next version. Thanks again for your time and effort! We will be happy to answer any further questions you may have. --- Rebuttal Comment 1.1: Comment: Thank you for running additional experiments and clarifying the theoretical questions. I also appreciate the further discussion on experiments. However, my concern is still that 4 algorithms have been introduced/proposed in the paper and only one has been evaluated. If the online algorithms are not the primary focus and are not evaluated, they could have been moved to the appendix. Instead, further experimental details and discussion could have been included in the main paper. As it stands, the paper has only half a page of experimental discussion, which is very uncommon in ML papers in the area of LLMs. Further, as the authors themselves agree, the motivation and introduction to their approach need major revision. For the above reasons, I believe the paper needs major restructuring, and I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you so much for the response and we are happy to further clarify the concerns.
Since the primary contribution of our paper lies in the theoretical analysis, we believe the online setting, particularly the incorporation of importance-weight estimation into the theoretical framework, is crucial for addressing the challenge of an unknown $\alpha$. Also, although our paper includes several algorithmic components, Algorithms 1 and 2 are merely subprocedures within the overall MOPO framework; we have only two algorithms, offline MOPO and online MOPO. We chose to evaluate offline MOPO because it can be easily compared with previous work and demonstrates the effectiveness of our projection-based optimization approach, while online MORLHF has received limited study in previous work.

If you consider MOP-RB and MOP-RF in Algorithm 3 as two distinct algorithms, it is important to note that, in practice, we typically ignore the exploration term. The key difference between MOP-RB and MOP-RF then lies only in whether we derive the optimal policy for each objective using a reward-based algorithm like PPO or a reward-free one like DPO. However, this derivation is not the central focus of our work: the most important aspect of our paper is the combination of the optimal policies, rather than the process of deriving them. In fact, we conduct experiments across different datasets, multiple objectives, and various LLMs to demonstrate the effectiveness of the proposed MOPO algorithm.

Regarding the motivation and introduction, we believe the misunderstanding can be resolved through improved writing rather than major revisions. Our motivation is to apply a more general form of aggregation, rather than relying solely on linear aggregation, in the MORLHF setting. It is natural to aggregate the expected rewards $u_i(\pi)$, as they are commonly used for evaluating an LLM's performance. We will improve our writing to avoid this confusion in the next version.
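As a concrete illustration of "combining the optimal policies", here is a minimal MOD-style decoding sketch that mixes per-objective next-token log-probabilities with a weighted sum and renormalizes. All names and the specific mixing rule are illustrative, not the paper's exact Algorithm 1.

```python
import numpy as np

def combine_next_token_logprobs(per_objective_logprobs, weights):
    """Combine per-objective policies as pi(y|x) ∝ prod_i pi_i(y|x)^{w_i},
    i.e. a weighted sum of log-probs renormalized over the vocabulary.
    Illustrative sketch only; not the paper's exact combination rule."""
    mixed = np.dot(weights, np.asarray(per_objective_logprobs))  # sum_i w_i * log pi_i
    return mixed - np.log(np.exp(mixed).sum())                   # log-softmax renormalization

# Two hypothetical per-objective policies over a 4-token vocabulary.
logp1 = np.log([0.7, 0.1, 0.1, 0.1])
logp2 = np.log([0.1, 0.7, 0.1, 0.1])
mixed = combine_next_token_logprobs([logp1, logp2], [0.5, 0.5])
```

With equal weights, the two policies' preferred tokens end up with equal combined probability, which is the "balancing" behavior the rebuttal describes.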
We hope our response has helped further address your concerns, and we would appreciate it being considered in your final recommendation.
Summary: This paper introduces a novel Multi-Objective RLHF (MORLHF) framework that leverages per-objective preference feedback to achieve Pareto optimality by aggregating multiple objectives into a single unified optimization target. Unlike existing approaches that rely on linear aggregation, this work overcomes their limitations by reformulating the non-linear aggregation maximization problem into a series of sub-problems, each involving only linear aggregation. This decomposition enhances computational efficiency. The framework is further extended to handle multi-group settings and scenarios with unknown objective weights. Experimental results demonstrate a clear advantage of the proposed method over benchmark approaches.

## update after rebuttal:
Thanks for the rebuttal. I will keep my original rating.

Claims And Evidence: Yes. The claim is supported by evidence in theorems and experiments.

Methods And Evaluation Criteria: Yes

Theoretical Claims: See my questions.

Experimental Designs Or Analyses: Yes, the experiments are good.

Supplementary Material: I checked some parts of the supplement.

Relation To Broader Scientific Literature: Multi-Objective RLHF is an interesting topic and has received increasing attention recently.

Essential References Not Discussed: See my questions.

Other Strengths And Weaknesses: See my questions.

Other Comments Or Suggestions: See my questions.

Questions For Authors:
1. The multi-group problem considered in this paper is closely related to reinforcement learning from human feedback (RLHF) with heterogeneous feedback, where individuals may provide varying feedback based on their expertise or background. However, the paper assumes that the type/group of each human is known in advance and that the reward of each objective remains invariant across groups. In practice, this group information may be unknown. Lee et al. (2024) address this challenge by learning a latent representation of heterogeneous human feedback.
Could the authors discuss potential ways to relax this assumption? Lee et al. (2024), *Low-Rank Contextual Reinforcement Learning from Heterogeneous Human Feedback*, https://arxiv.org/abs/2412.19436.

2. Given that there are $m$ different objectives and $N$ distinct human groups, does the framework assume that each human provides pairwise feedback for all $m$ objectives? If so, this may be a strong assumption. In practical online interaction settings, when asking a human to compare two responses, it may be impractical to request comparisons across all specified objectives.

3. Building on the previous question, the assumption that $m$ objectives are explicitly available may not always hold in practice. For example, when soliciting human feedback on two responses to a prompt, we do not necessarily know which specific objectives a human considers when making their judgment. How does the proposed framework address this challenge?

4. Assumption 5.3 requires a constant gap between the reward functions of two provided responses, which appears to be a strong assumption. In practice, this gap may decrease over time as the learning process progresses—i.e., as training advances, both responses may improve, leading to a diminishing difference in rewards. Would it be possible to model this gap as a decaying function of the learning horizon $T$ or other parameters?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive response and the time spent reviewing our paper! We address your questions as follows.

>1. The paper assumes that the type/group of each human is known in advance.

In this paper, we assume the group information is known. However, if the group information is unknown, we can use EM or other clustering algorithms to first estimate the group indices. This approach is also used in previous work [1].

>2. Does the paper assume that humans provide pairwise feedback for all objectives?

Yes, we assume that humans can give feedback for all objectives. However, this is not a strong assumption: it is common in previous MORLHF works [2,3,4,5] to assume access to feedback or a reward model for all objectives. Moreover, in our online setting, we do not need a human to give feedback on all objectives for one response. Instead, we only need users to give pairwise feedback on the objective they consider most important: we ask them to first select the most important objective and then provide a comparison on it. The selection of the objective is modeled by a softmax distribution based on the weighted reward gap $\alpha_i \cdot (r_i(x,y_1)-r_i(x,y_2))$ (see Line 392 in our paper).

>3. The assumption that objectives are explicitly available may not always hold in practice.

That's a great question! One possible way to address this challenge is to request a "reason" alongside the human's response. This reason could be a property or a short explanation, such as "helpfulness" or "Response 2 is too long". When the model encounters a reason that differs from the previous ones, it can identify it as a potentially new factor and add it as a new objective. This is an exciting and meaningful direction, which we leave for future research.

>4. Assumption 5.3 seems too strong.
In fact, the distributions of $y_1$ and $y_2$ will not become close as training advances: $y_1\sim \pi^*(y\mid x)$ follows the optimal policy, while $y_2\sim \pi_{ref}(y\mid x)$ follows the reference policy (typically the policy after SFT). Therefore, we simply assume that the performance gap between the optimal human-aligned policy and the reference policy remains constant, which is generally reasonable in practice.

Thanks again for your time and effort! If you have more questions, we are happy to address them.

[1]. Chakraborty et al. 2024: MaxMin-RLHF: Alignment with Diverse Human Preferences.
[2]. Shi et al. 2024: Decoding-Time Language Model Alignment with Multiple Objectives.
[3]. Yang et al. 2024: Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment.
[4]. Wang et al. 2024: Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts.
[5]. Mukherjee et al. 2024: Multi-Objective Alignment of Large Language Models Through Hypervolume Maximization.
Summary: This paper introduces a projection-based optimization framework for Multi-Objective Reinforcement Learning with Human Feedback (MORLHF). The approach reformulates non-linear reward aggregation as a series of linear sub-problems, enabling computationally efficient Pareto-optimal solutions. The framework is extended to multi-group settings, allowing for consensus or aggregated utility optimization across diverse preference groups.

Claims And Evidence: The authors argue that existing MORLHF methods, which rely on linear aggregation, fail to address fairness and multi-group preferences effectively. These claims are supported by theoretical sublinear regret guarantees for both offline and online settings and by empirical results demonstrating improved fairness and competitive performance against baselines like RS, MOD, and max-min RLHF.

Methods And Evaluation Criteria: Yes, it makes sense.

Theoretical Claims: This paper provides a relatively comprehensive proof for its theoretical statements.

Experimental Designs Or Analyses: The experimental design in this paper is not comprehensive enough.
1. The lack of analytical experiments on multiple LLMs may limit the universality of the findings, and the experimental verification of the framework is limited to only three objectives, which raises uncertainty about its potential extension or wider application.
2. While the experiments validate the framework's effectiveness, they lack an analysis of hyperparameter impacts.
3. Although the authors emphasize computational efficiency, they fail to provide a direct cost comparison with state-of-the-art LLMs or nonlinear aggregation baselines.

Supplementary Material: I have reviewed a portion of the supplementary material.

Relation To Broader Scientific Literature: The paper situates itself within the RLHF and multi-objective RL literature, and proposes multi-objective and multi-group RLHF problems under non-linear aggregation.
It also connects to social choice theory and Blackwell approachability.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
Strengths:
1. An efficient solution to multi-objective and multi-group RLHF problems under non-linear aggregation.
2. The theoretical derivation is comprehensive.

Weaknesses:
1. While the experimental results demonstrate strong performance, the scalability of the proposed method remains uncertain, as it was tested on only one LLM and three objectives.
2. Lack of guidance on choosing hyperparameters for practitioners.

Other Comments Or Suggestions: Providing a code library would assist other researchers in validating and extending the work.

Questions For Authors: No

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive response and the time spent reviewing our paper! We address your questions as follows.

>1. Analysis of hyperparameter impacts and selection.

Since $\alpha$ and $p$ are assumed to be given in the experiments, the only hyperparameter specific to our algorithm is $c$, which represents the requirement set by the human. If $c$ is too small, the target set is easily reached, and MOPO loses the motivation to change direction (in this case, the induced LLM already satisfies the human's requirement). On the other hand, if $c$ is too large, the target set becomes distant, and the projection direction tends toward the balanced direction $[1/m,1/m,\cdots,1/m]$, which is also not conducive to maximizing the weighted p-norm expected utility. Hence, a value slightly larger than the current expected utility is a suitable choice of $c$ for maximizing the expected utility.

In practice, $c$ can be selected in different ways: (a) Since it represents a requirement set by the human, it can be provided directly by the human. Note that in this case, once the LLM satisfies the requirement, it has no motivation to further increase the expected utility. (b) If the goal is to maximize the expected utility function $u(\pi)=\sum_{i=1}^m(\mathbb{E}\_{x\sim \rho, y\sim \pi}[r_i(x,y)])^p$, $c$ can be chosen adaptively to be slightly larger than the current expected utility, such as $c = u(\pi^t) + 0.01$ after obtaining the policy $\pi^t$ at each round $t$; the model then tends to maximize the expected utility by approaching the target set.

In the experiments in our original paper, we manually chose $c$ slightly larger than the previously computed expected utility. We also provide additional experiments using the adaptive approach (b) to validate these statements; see the additional experiment for details.
The other hyperparameters, $\beta$ and $\eta$, are standard and appear in prior work ($\beta$ in [1], $\eta$ in [4]); we simply use the standard configuration.

>2. The lack of analytical experiments on multiple LLMs may limit the universality of the findings.

We conduct experiments on LLAMA2-7B, a commonly used model, and use three objectives since they are classical metrics used in previous works [1,2,3]. Since the primary contribution of this paper is theoretical, we provide preliminary experiments on popular benchmarks. We also provide experimental results on the Summarize-from-Feedback dataset: we fine-tune a different LLM, ALPACA-7B, for the objectives 'Summary' and 'Faithful' to obtain two policies $\pi_1, \pi_2$, and then perform experiments on these two policies. We use $\alpha=(1/2,1/2)$ and $p=0.5$ for the aggregation, and $c=u(\pi^t) + 0.01$ at round $t$. The results are shown in the following table, which shows that our MOPO performs better than RS and MOD with the adaptively selected parameter $c$.

| $\alpha$ | MOPO | RS | MOD |
|-----------|-------|-------|-------|
| (0.3,0.7) | 0.521 | 0.515 | 0.507 |
| (0.5,0.5) | 0.522 | 0.517 | 0.502 |
| (0.7,0.3) | 0.549 | 0.552 | 0.530 |

>3. Computational efficiency compared to the non-linear aggregation baseline.

The nonlinear aggregation baseline in our paper is Aggregated Reward (AR), which directly aggregates the rewards into $r(x,y) = (\sum_{i=1}^n \alpha_i r_i^p(x,y))^{1/p}$ and trains on this reward directly using PPO. It cannot be directly applied to the negative-reward setting because of the p-norm aggregation; to address this, we transform the rewards into non-negative values by applying $r_i(x,y) = \max\\{r_i(x,y),0\\}$. As for computational efficiency, AR needs retraining each time we change the aggregation method or the weights, which takes more than 7 hours in our experiments, plus 30 minutes of evaluation per weight.
In contrast, our training-free algorithm does not require retraining. Instead, we use pre-computed optimal policies for each objective, which requires less than 30 minutes of evaluation to obtain the expected reward for each weight.

We will release our code with our revised version. Thanks again for your time and effort! If you have more questions, we are happy to address them.

[1]. Shi et al. 2024: Decoding-Time Language Model Alignment with Multiple Objectives.
[2]. Zhou et al. 2024: Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization.
[3]. Yang et al. 2024: Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment.
[4]. Cen et al. 2024: Value-Incentivized Preference Optimization: A Unified Approach to Online and Offline RLHF.

---

Rebuttal Comment 1.1:

Comment: Thanks for the authors' response. I have no further questions.
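For concreteness, the AR baseline's p-norm aggregation and the truncation workaround described in this rebuttal can be sketched as follows. This is an illustrative sketch, not the paper's code; the function name is ours. It also shows why fractional $p$ (e.g. $p=0.5$) forces the truncation: negative rewards have no real $p$-th power.

```python
import numpy as np

def aggregated_reward(rewards, weights, p):
    """AR-style aggregation r = (sum_i a_i * r_i^p)^(1/p).

    For fractional p, negative rewards are first truncated to zero via
    r_i <- max(r_i, 0), mirroring the rebuttal's workaround."""
    r = np.maximum(np.asarray(rewards, dtype=float), 0.0)
    return float(np.dot(weights, r ** p) ** (1.0 / p))

aggregated_reward([1.0, 1.0], [0.5, 0.5], 0.5)    # -> 1.0
aggregated_reward([-2.0, 1.0], [0.5, 0.5], 0.5)   # -> 0.25, the negative reward is discarded
```

The second call illustrates the failure mode the rebuttal mentions: truncation throws away the information carried by the negative reward, whereas the projection-based formulation keeps the (possibly negative) expected reward vector intact.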
Summary: The paper proposes a general framework for multi-objective, multi-group RLHF. The authors creatively draw inspiration from RL with Blackwell approachability to handle the non-linear structure of the aggregated reward.

Claims And Evidence: The authors provide theoretical and empirical guarantees for the proposed offline and online algorithms. Empirical evidence on Anthropic HH supports these claims, with metrics showing performance gains over simpler baselines.

Methods And Evaluation Criteria: The proposed method is a good addition to the literature. I have some slight concerns regarding the theoretical and empirical analysis, discussed in the two sections below.

Theoretical Claims:
- The methods developed are built on the assumption that the reward is non-negative and bounded. However, the Bradley–Terry-derived reward may not be strictly in $[0,B]$. If negative values occur, the paper should discuss how p-norm aggregation (and potentially other proposed methods) might be affected or require rescaling.
- The minimax formulation in Equations (6) and (7) is a central theoretical contribution. However, solving it might be computationally challenging; the paper would benefit from a more concrete explanation of how one can efficiently compute or approximate the solution in large-scale settings.

Experimental Designs Or Analyses: While the results on Anthropic HH illustrate the method's viability, this dataset is somewhat outdated. Testing on newer datasets, such as HelpSteer2 with higher-quality multi-objective labels and more categories, would better demonstrate robustness and broad applicability.

Supplementary Material: Yes

Relation To Broader Scientific Literature: The paper extends the literature on multi-objective and multi-group RLHF and provides novel algorithms.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Please see above

Other Comments Or Suggestions: Please see above

Questions For Authors: Please see above

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your positive response and meaningful review! We address your questions as follows.

>1. If negative values occur, the paper should discuss how p-norm aggregation applies.

Both in theory and in practice, our algorithm can handle negative as well as positive rewards. By shifting the objective from maximizing the aggregated expected reward to minimizing the distance between the target set and the expected reward vector, we can effectively handle situations where the expected reward is negative (the expected reward vector can be negative while the target set remains in the positive range). In this case, the goal is to make the estimated reward vector (which may be negative) closely align with the target set. For the theoretical part, note that the p-norm aggregation appears in the definition of the target set, and the expected reward vector $S(\pi)$ can be negative. In practice, MOPO performs well in experiments where the rewards are negative, whereas truncating rewards to ensure positivity and directly aggregating them leads to failure (AR in our table). Moreover, the boundedness of the reward function is a common assumption and holds for most reward models.

>2. The minimax formulation in Equations (6) and (7) is a central theoretical contribution. However, solving it might be computationally challenging; the paper would benefit from a more concrete explanation of how one can efficiently compute or approximate the solution in large-scale settings.

This is an important question. We have two approaches to make it practical. The first is to use a reward-free DPO algorithm, as in Eq. (8) in our paper.
We can further simplify Eq. (8) by replacing $\log \pi_\theta(y\mid x)$ with $\alpha_i \log \pi_{\theta_i}(y\mid x)$, so that Eq. (8) becomes a decentralized optimization, where $\theta_i = \arg\min_{\theta}\beta \alpha_i\mathbb{E}\_{\pi_{base}} [\log \pi_{\theta}(y\mid x) - \eta \ell(D_i, \theta)]$ for each objective $i$, and we can apply VPO [1] to estimate each $\theta_i$. Second, one can ignore the first exploration term, and Eq. (6) and Eq. (7) become traditional MLE for estimating the reward model. One can also use a pre-existing reward model if available. In our practical MOPO algorithm, we use a pre-existing reward model and use PPO to obtain the optimal policy $\pi_{\theta_i}$ for each objective. Then, Lines 2 and 3 in Algorithm 1 can be implemented by previous linear MORLHF algorithms like MOD and RS. We will discuss this in detail in our next version.

>3. While the results on Anthropic HH illustrate the method's viability, this dataset is somewhat outdated. Testing on newer datasets, such as HelpSteer2, with higher-quality multi-objective labels and more categories would better demonstrate robustness and broad applicability.

We also provide experimental results on the Summarize-from-Feedback dataset [3]. We fine-tune a different LLM, ALPACA-7B, for the objectives 'Summary' and 'Faithful' to obtain two policies $\pi_1, \pi_2$, and then perform experiments on these two policies. We use $\alpha=(1/2,1/2)$ and $p=0.5$ for the aggregation, and choose $c$ adaptively by $c=u(\pi^t) + 0.01$, where $u(\pi^t)$ is the expected utility of $\pi^t$ at round $t$ (see the response to Reviewer MMQx for the selection of $c$ and the response to Reviewer uhy5 for the definition of $u(\pi)$). The following table shows the aggregated expected utility, which shows that our MOPO performs better than RS and MOD.
Note that we do not select the HelpSteer dataset since the variance of its reward model is too small (see Appendix F.4 in [2]), making it less suitable for demonstrating the differences between linear and non-linear aggregation.

| $\alpha$ | MOPO | RS | MOD |
|-----------|-------|-------|-------|
| (0.3,0.7) | 0.521 | 0.515 | 0.507 |
| (0.5,0.5) | 0.522 | 0.517 | 0.502 |
| (0.7,0.3) | 0.549 | 0.552 | 0.530 |

Thanks again for your time and effort! We will be happy to answer any further questions you may have.

[1]. Cen et al. 2024: Value-Incentivized Preference Optimization: A Unified Approach to Online and Offline RLHF.
[2]. Shi et al. 2024: Decoding-Time Language Model Alignment with Multiple Objectives.
[3]. Stiennon et al. 2020: Learning to summarize from human feedback.

---

Rebuttal Comment 1.1:

Comment: Thank you for the response. I'll keep my score here.
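The adaptive rule for $c$ used in these rebuttals can be sketched in a few lines. This is an illustrative sketch under stated assumptions: the utility follows the form $u(\pi)=\sum_i(\mathbb{E}[r_i])^p$ written in the rebuttal (weights omitted there), expected rewards are assumed non-negative for fractional $p$, and the function names are ours.

```python
def expected_utility(expected_rewards, p):
    """Aggregated expected utility u(pi) = sum_i (E[r_i])^p.

    Assumes non-negative expected rewards when p is fractional."""
    return sum(r ** p for r in expected_rewards)

def adapt_target(u_current, margin=0.01):
    """Adaptive target c = u(pi^t) + 0.01: kept slightly above the current
    utility so the policy keeps moving toward the target set."""
    return u_current + margin

u = expected_utility([0.36, 0.49], p=0.5)  # 0.6 + 0.7, i.e. approximately 1.3
c = adapt_target(u)                        # approximately 1.31
```

Choosing the margin small keeps the target set just out of reach at every round, which is what gives the policy a persistent improvement direction.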
PaperBench: Evaluating AI’s Ability to Replicate AI Research
Accept (poster)
Summary: The authors have created a benchmark called PaperBench that tests model ability to reproduce modern machine learning research papers. The dataset consists of 18 papers (and a small dev set). To track granular progress on result reproduction, the authors create a hierarchical rubric for each paper. Judging outputs is done with an LLM, and the authors evaluate how well this performs compared to human judgements (pretty well). The authors evaluate 4 LLMs on the benchmark and find that their performance lags behind that of humans.

Claims And Evidence: The authors are proposing a benchmark, not really testing claims. Claims regarding human and model performance seem reasonable.

Methods And Evaluation Criteria: Yes

Theoretical Claims: No proofs in this paper

Experimental Designs Or Analyses: No

Supplementary Material: No

Relation To Broader Scientific Literature: The authors discuss this thoroughly in the Related Work section and I know of no connections they don't identify

Essential References Not Discussed: I don't know of any

Other Strengths And Weaknesses:
* Other strengths
  * The greatest strength of this paper is that it takes a first stab at an interesting and hard problem in a reasonable way
  * The scoring tree idea is intuitive and reasonable
  * Well-written
* Other weaknesses
  * I'm a bit concerned about the longevity/usefulness of the benchmark
    * 13/18 rubrics still under review
    * More repos could be added that wouldn't be in the blacklist
    * Papers and code repos could end up in pre-training data (this is identified by the authors)
  * Counting "overnight"-type computer running for humans seems to put them at a relative disadvantage, because models could let something run while also working on a different part of the project
  * In general it would be nice to see more qualitative results/discussion, e.g., when did models finish (or give up), time-wise? Which types of nodes did models do better on (based on the classification in the paper)?
* The paper has some points I'm unclear on
  * Why does the human line ultimately go down in Figure 4?
  * Why were LLMs stopped at 12 hours? It seems like progress was still being made at that point.
  * Why can't LLMs just upload results to their repo? I see you only consider the executed submission, but it seems from the paper that this includes the non-executed submission.
* It would be nice to see further discussion of some points
  * Why are scores on some papers higher than others (Figure 3)? Is it something about the papers? About the scoring tree?

Other Comments Or Suggestions:
* On line 328, I think 5.3 is a typo; the link doesn't go anywhere.

Questions For Authors: Using this as a tl;dr spot for my evaluation. Recommending "accept". See reasons for and against (including some questions) in "Other Strengths and Weaknesses". This benchmark has some substantial limitations (also discussed by the authors), but I think it provides a useful starting point for measuring LLM abilities on this hard and interesting problem.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
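The weighted scoring-tree idea discussed in this review can be illustrated with a minimal sketch. The tree structure, weights, and class name below are hypothetical, not PaperBench's actual rubric schema: leaves hold pass/fail scores, and an internal node's score is the weight-normalized average of its children.

```python
from dataclasses import dataclass, field

@dataclass
class RubricNode:
    """One node of a hypothetical hierarchical rubric tree."""
    weight: float
    score: float = 0.0                      # used only at leaves
    children: list = field(default_factory=list)

    def value(self) -> float:
        """Leaf: its own score. Internal node: weighted mean of children."""
        if not self.children:
            return self.score
        total = sum(c.weight for c in self.children)
        return sum(c.weight * c.value() for c in self.children) / total

# A toy paper rubric: code development (weight 2) and results (weight 1).
root = RubricNode(1.0, children=[
    RubricNode(2.0, children=[RubricNode(1.0, score=1.0), RubricNode(1.0, score=0.0)]),
    RubricNode(1.0, children=[RubricNode(1.0, score=1.0)]),
])
```

Here `root.value()` is (2 * 0.5 + 1 * 1.0) / 3, so partially completing a heavily weighted subtree still earns partial credit, which is the granular-progress property the review highlights.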
Rebuttal 1: Rebuttal: Thank you for your insightful feedback! We address your comments below:

1. Concerns about longevity/usefulness of the benchmark

Thank you for raising this important point! We are happy to announce that we now have author approval on 20 rubrics, so the dataset and results now cover 20 papers. We agree that there are contamination risks, both as more potentially helpful resources become available online to agents that can browse the web (and aren't included in the blacklists) and as papers and code repos get included in pre-training data. Building better monitors can help detect occurrences of the former, but won't be very helpful for pre-training contamination. We hope that in the future the PaperBench dataset can be expanded to include new papers that help the benchmark stay relatively uncontaminated.

2 + 4a + 4b. Human vs. LLM time accounting ("overnight computer runs")

We agree with your critique. A few factors make directly comparing human and AI agent time confusing: for example, AI agent time can be influenced by API inference speed, which is variable. Since the version of the paper that was initially reviewed, we have significantly expanded our human baseline experiments in our new manuscript. The updated version shows considerably improved human replication scores (now up to 41.4%, from 22.0% previously) via extending the baselines from 1 week to 4 weeks, which we believe is a more realistic assumption for a human attempting the task. These new human baselines demonstrate that the PaperBench replications, while challenging, are indeed attainable for skilled humans, and the plots suggest that scores would continue to improve if humans were given more time. We discuss this in a revised Section 5.4. In these results (on a 3-paper subset) we find that our tested model, o1, plateaus after around the first hour, which justifies our decision to impose a timeout of 12 hours.
On these new human baseline results, the human line continues to go up and shows no sign of plateauing.

3. We have now included a table in the appendix which stratifies scores by requirement type. We observe that models perform poorly on Code Execution and Result Analysis requirement types, while scoring better on Code Development nodes.

4c. Why can't LLMs just upload results to their repo?

Thank you for raising this important criticism! In light of this, we have changed the way we grade Result Analysis nodes. All Result Analysis nodes are now judged only by looking at files created or modified in the reproduction step (as well as docs, reproduce.sh, and reproduce.log). By forcing the judge to look only at newly modified files, this better guards against the judge being fooled by results included in the preliminary submission (though it could still be hacked around by an adversarial AI agent that wants to fool the judge).

5a. Why are scores on some papers higher than others?

Some paper scores are higher than others due to inherent differences in the difficulty of replicating different papers. By having papers of different difficulty in the dataset, we prevent the benchmark from saturating quickly and get a better estimate of agent capabilities, since we track the agent's performance on a greater variety of tasks. The main benchmark metric is the average Replication Score, i.e., the mean Replication Score across papers.

---

Rebuttal Comment 1.1:

Comment: Thanks for the response, and I'm happy to hear the paper has seen continued development.
Summary: This paper contributes PaperBench, a new benchmark for replicating the implementations of top-tier AI conference research papers, including their code and the papers' results from running the generated code, including their analysis. It uses 18 Spotlight and Oral papers. It leverages LLMs to automate grading of the benchmark, and proposes an LLM-as-judge and a specialized dataset, including addendums, to evaluate this benchmark.

Claims And Evidence:
* "We open-source our code". Code is not open-sourced or available to reviewers for review. I encourage the authors to fulfill this claim to allow others to use this benchmark.
* "PaperBench: a dataset of 18 machine learning papers, each accompanied by a human-curated and author-approved rubric". The contribution of this paper is a new benchmark, PaperBench; however, given that the code is not contributed with the paper and the rubrics are not given either, this claim is untrue as it stands. I encourage the authors to release their code, or provide the full rubrics within the appendix of the paper, to allow the benchmark to be used and implemented by others.
* "JudgeEval: an auxiliary evaluation that evaluates how accurately a given judge grades replication attempts." No novelty, as using an LLM as a judge has been done before (Zheng et al. 2023).
* "measuring AI agents' abilities to conduct long-horizon tasks and engage in machine learning research and development." This claim should be tightened, as it is too broad: although PaperBench measures how well an LLM with agent scaffolding can replicate the codebase of a paper, it is not actually doing ML research and development, since the paper, which contains the method and any other research findings, is given to the LLM when replicating the codebase. If the authors truly wish to create a benchmark for ML research evaluation, this would be similar to AI Scientist (Lu et al. 2024).

References:
* Zheng, Lianmin, et al.
"Judging llm-as-a-judge with mt-bench and chatbot arena." Advances in Neural Information Processing Systems 36 (2023): 46595-46623.
* Lu, Chris, et al. "The AI Scientist: Towards fully automated open-ended scientific discovery." arXiv preprint arXiv:2408.06292 (2024).

Methods And Evaluation Criteria:
* The methods and evaluation criteria make sense for a new benchmark contribution; however, the evaluation aspects of the benchmark and its construction could all be better motivated and improved.
* The paper lacks motivation for why the tree rubric structure was chosen over a simple checklist structure; was this used in other papers, perhaps? Also, how are the weights determined in the rubrics? Do they have a particular meaning, and should the weights follow a specific structure? Why are weighted nodes needed over uniform nodes?
* The judge LLM agent only reads the top-k code files, where k=10. Why was 10 selected? Can you motivate this choice through an ablation over different k's? Or are alternative approaches superior, such as scanning through multiple files with that rubric until you find files that satisfy it? Why not avoid this and use an LLM with a larger context window, such as LLMs that can process 2 million tokens?
* How many random seeds, or independent runs, were used to calculate the error bars in the table? This was not clear from the paper.

Theoretical Claims: Not applicable; no theoretical claims are made.

Experimental Designs Or Analyses: The proposed benchmark was benchmarked correctly against its outlined implementation.

Supplementary Material: Skimmed parts of the supplementary.

Relation To Broader Scientific Literature: A key proposed contribution of this paper is a new ML paper codebase replication benchmark. The idea of using LLMs to replicate results and code from a paper is not novel (see "Replicating a High-Impact Scientific Publication Using Systems of Large Language Models.", Bersenev et al. 2024); however, this specific ML-related replication benchmark is novel.
However, the authors should cite this paper and any others that already perform replication of papers. References: * Bersenev, Dennis, Ayako Yachie-Kinoshita, and Sucheendra K. Palaniappan. "Replicating a High-Impact Scientific Publication Using Systems of Large Language Models." bioRxiv (2024): 2024-04. Essential References Not Discussed: The most relevant related work is not discussed ("Replicating a High-Impact Scientific Publication Using Systems of Large Language Models.", Bersenev et al. 2024), and a clear delineation of novelty could be beneficial to readers. Additional citations that would help the reader include existing work in the area of using an LLM as a judge, e.g., (Zheng et al. 2023). The core claim of conducting scientific research, which this benchmark does not measure, should also be differentiated and refined; papers on automated scientific research include (Lu et al. 2024). * Bersenev, Dennis, Ayako Yachie-Kinoshita, and Sucheendra K. Palaniappan. "Replicating a High-Impact Scientific Publication Using Systems of Large Language Models." bioRxiv (2024): 2024-04. * Zheng, Lianmin, et al. "Judging llm-as-a-judge with mt-bench and chatbot arena." Advances in Neural Information Processing Systems 36 (2023): 46595-46623. * Lu, Chris, et al. "The AI Scientist: Towards fully automated open-ended scientific discovery." arXiv preprint arXiv:2408.06292 (2024). Other Strengths And Weaknesses: * Clarity: The paper is well written and clear to follow. * Originality: The paper replication benchmark is novel; however, it lacks proper motivation for why certain elements of the benchmark were chosen, and references to how existing benchmarks were constructed; see above comments. * Significance: Having a benchmark to replicate papers is valuable for the community; however, it could be made more valuable by also considering how this approach could be made fully scalable, and whether the rubrics and any addendums can be self-generated for new papers, to address the problem that the benchmark contains only a few papers (18). 
Additionally, it would be helpful for the community if the authors could analyze the failure cases and provide insight into why the scores are so low, what tasks on average the LLM agent gets right, and what current open challenges it struggles with. If agent scaffolding is the big problem as discussed in the paper, can the authors investigate ablations with different agent scaffolding techniques (Huang et al. 2024)? References: * Huang, Xu, et al. "Understanding the planning of LLM agents: A survey." arXiv preprint arXiv:2402.02716 (2024). Other Comments Or Suggestions: * Why is the human replication rate so low? If it is so low, how can we expect LLM agents to solve it? * Can you inspect which websites it visited in the logs? How do you ensure no leakage from visiting blogs discussing the paper, or even other codebases that may have partial re-implementations of it? * The assumption of unrestricted compute seems to make the benchmark difficult for others to use. * Can any consistency be imposed across the rubrics? * Can future versions also incorporate multi-modal LLMs to analyze plots too? * What about using the benchmark with small-context LLMs? We could benefit from a discussion on the suitability of and requirements for an LLM to be used in the benchmark. * It seems the scores should be normalized per paper, rather than counted absolutely. * Can you run a sanity check where you use your judge on the actual repo produced by the paper (plus any addendums) to check whether the replication score is 100%? If not, then the heuristics are invalid. * "AISI’s Basic Agent scaffolding" is not described in the paper or cited; there is only a link. The paper would benefit from an explanation of this scaffold so as to be self-contained. * There is no guarantee or test that the addendum is fully correct, as it was created when creating the rubrics. A better sanity check is needed to ensure that it is correct. * Ablation providing the agent with the Judge addendum: does it improve the score? 
If so, does this indicate poor retrieval and searching for relevant papers that help with implementing the algorithm/paper, i.e., papers that were accessible when the paper came out and that the original authors may have had access to? * Explaining the relative performance amongst LLMs, and any insights there, could be helpful for the reader. Questions For Authors: 1. Analysis of the failures: why are the scores so low? Do these papers rely on specific external libraries that the LLM doesn’t know about, such as the transformers library, or a particular dated version of a dependency (numpy, JAX, TensorFlow or PyTorch)? 2. Analysis of successes: which of the three different types do we see succeeding and failing, and are there any insights here? 3. Can you artificially scale up the process to create these heuristics from papers? If so, how? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to carefully review our paper! The rebuttal is subject to a character limit; we address what we saw as the highest-priority comments below: > Code is not open sourced, or available to reviewers for review. We will open source our codebase for the camera-ready release. The codebase can be found under the supplementary materials, which has been available during the review process. > No novelty, as using LLM as a judge has been done before JudgeEval is a necessary auxiliary evaluation in order for us to justify using an LLM-based judge for the grading of paper replication attempts. We’d like to stress that we’re not claiming the use of a judge is a novel idea. > This claim should be tightened, as it is too broad… We now use updated wording in the Abstract and Introduction which clearly states the intended scope of PaperBench: we claim that it is “a benchmark evaluating the ability of AI agents to _replicate_ state-of-the-art AI research”. > …the evaluation aspects of the benchmark and its construction can all be better motivated and improved. 2a. Why use a tree rubric structure, and how weights are determined: The tree structure allows us to measure partial progress towards a result in a paper (allowing us to determine whether agents are scoring highly on particular sub-trees) and hierarchically decompose the grading task into smaller (and therefore easier) tasks for the Judge. Rubric weights are carefully set by each rubric creator and are author-reviewed to reflect the relative importance of each sub-task. 2b. Judge LLM Agent only reads top-k code files, where k=10: Our judge achieved an F1 score of 0.83 on JudgeEval using $k=10$. Preliminary experiments showed larger values of $k$ yielded similar performance, while smaller values lowered the F1 score. Future work exploring alternative judge designs (e.g., longer-context or vision-capable models) would be valuable. 2c. 
Clarifying Number of Seeds and Runs ("How many random seeds?") In Section 5.2 of our new manuscript, we clarify that we performed evaluations with three independent runs for each evaluated agent to robustly estimate performance and error bars. > However the authors should cite this paper... Thanks for pointing us towards Bersenev et al.’s work; we will reference it in related work. > Significance: …if the rubrics and any addendums can be self generated for new paper… We’ve reached the character limit for this rebuttal, so please see our detailed response to Reviewer b2MF’s comment: “Not scalable: each paper requires…” > Why is the human replication rate so low? If so low, how can we expect LLM Agents to solve it? We’ve since improved the human baseline experiment, extending the best performers after week one to continue their attempt for a total of four weeks, a more realistic timeframe for humans. This led to an increase in the average replication score from 22.0% to 41.4%. In our updated Figure 3, we see a steady increase in the human baseline score over time, which we expect to continue given more time. > Assumption of un-restricted compute, seems difficult for others to use as a benchmark. We agree that running PaperBench with our setup is too resource intensive for many users. We therefore introduce a variant in our updated manuscript, PaperBench Code-Dev, where models are only evaluated on Code Development nodes. Users therefore don’t need a GPU or an expensive VM, which cuts costs substantially. > Can any consistency be imposed across the rubrics? We had multiple processes in place to ensure rubrics are created in a consistent manner. We adhered to team-wide guidelines for rubric creation. We also had a core team of reviewers who reviewed every rubric in the dataset, ensuring consistency. Additionally, the author of each paper reviewed the relevant rubric for correctness, often resulting in many iterations of refinement. 
> Can you run a sanity check where you use your judge on the actual repo produced… In JudgeEval, we manually graded original author repositories and found replication scores below 100%. As explained in the paper, this is expected since original codebases often have bugs, are incomplete, or lack the required reproduce.sh scripts. > No guarantee or test that the addendum is fully correct… Each addendum was co-developed with and reviewed by the original authors of each paper – often going through multiple rounds of iterations of refinement – which ensured they were high-quality and accurate. > Explain relative performance amongst LLMs… Thanks for the suggestion! We’ve updated the paper to briefly analyze why models perform differently from one another. Most models frequently finished early or faced a problem they couldn’t solve. All agents failed to verbalize how best to use the limited amount of time available to them. — Thank you for your valuable feedback! Please consider raising your rating if you feel that the paper has improved.
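As an aside on the tree-rubric grading discussed in point 2a above, the weighted hierarchical aggregation could be sketched as follows. This is a minimal illustrative sketch: the node layout, weights, and pass/fail verdicts are hypothetical, not taken from any actual PaperBench rubric.

```python
# Minimal sketch of weighted tree-rubric scoring (hypothetical example,
# not an actual PaperBench rubric).

def score(node):
    # Leaf: a binary pass/fail verdict from the judge.
    if "children" not in node:
        return 1.0 if node["passed"] else 0.0
    # Internal node: weight-normalized average of child scores, so partial
    # progress on a sub-tree earns proportional partial credit.
    total = sum(child["weight"] for child in node["children"])
    return sum(child["weight"] * score(child)
               for child in node["children"]) / total

rubric = {
    "children": [
        {"weight": 3, "children": [          # e.g., "method implemented"
            {"weight": 1, "passed": True},   # e.g., model code present
            {"weight": 2, "passed": False},  # e.g., training loop runs
        ]},
        {"weight": 1, "passed": True},       # e.g., headline result matches
    ],
}

print(score(rubric))  # 0.5: one-third credit on the heavy sub-tree, full credit on the leaf
```

Because each sub-tree aggregates independently, a score can also be reported for a single sub-tree in isolation (e.g., Code Development nodes only), which is the spirit of the PaperBench Code-Dev variant described above.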
Summary: The paper introduces PaperBench, a benchmark for evaluating AI agents’ ability to replicate SOTA ML research. The dataset comprises 18 papers from ICML 2024. Each paper is accompanied by a manually curated rubric, which hierarchically decomposes each replication task into smaller gradable subtasks. An LLM-based judge grades replication attempts. Experiments on proprietary LLMs show that the best-performing LLM -- Claude Sonnet 3.5 -- achieves a replication score of 14.1%, below the 22.0% achieved by human PhD participants on a five-paper subset. The study highlights the challenges AI faces in long-horizon ML research tasks and suggests that AI models can perform some research tasks but still fall short of human-level capabilities. Claims And Evidence: Overall, the claims made in this submission are convincingly supported by the presented evidence. However, the intermediate nature of the results (13 out of 18 rubrics under review by authors) somewhat limits the certainty of final conclusions. Additionally, the limited number of human annotators and AI model evaluations constrain the robustness and generalizability of the benchmark conclusions. Methods And Evaluation Criteria: Yes Theoretical Claims: The paper does not explicitly contain theoretical claims or proofs. Experimental Designs Or Analyses: Yes, including comparing models under controlled conditions and comparing proprietary LLMs such as o1 and Claude 3.5 Sonnet. Supplementary Material: Yes, details on rubric creation, evaluation instructions, and judge implementation were reviewed. Relation To Broader Scientific Literature: The paper clearly situates its contribution within related benchmarks such as CORE-Bench, MLE-bench, MLAgentBench, and RE-Bench, emphasizing that PaperBench uniquely assesses AI’s ability to replicate current state-of-the-art research papers autonomously and from scratch. 
Essential References Not Discussed: Line 47 mentions the UK AI Safety Institute’s Basic Agent scaffolding without an explicit citation, which should be included to clarify the source and structure of the agent scaffolding. Other Strengths And Weaknesses: Strengths: 1. Proposes hierarchical rubrics that enable precise measurement of partial replication progress. 2. Transparent evaluation and clearly documented processes, including the introduction of JudgeEval. 3. Evaluates many state-of-the-art models on PaperBench, measuring AI agents’ abilities to conduct long-horizon tasks and engage in machine learning research and development. Weaknesses: 1. Very limited annotations and data: a) only 18 papers, and b) only 8 submissions from PhD students to establish human baseline performance. This restricts the benchmark's representativeness and reliability. 2. Not scalable: each paper requires a manually created rubric and associated weights, raising concerns about scalability due to the intensive manual effort involved. 3. Very expensive for subsequent researchers to utilize the benchmark: grading a single submission costs 150 USD, and evaluating 18 submissions would cost 2,700 USD for a single model or method. This high cost is prohibitive for others aiming to benchmark their models, thus limiting the benchmark's broader impact. 4. Lack of experiments involving prominent open-source models such as DeepSeek-R1 or LLaMA, restricting the generalizability of the findings. Other Comments Or Suggestions: Line 47: Lack of citation - UK AISI’s Basic Agent scaffolding Questions For Authors: 1. You mention that 13 rubrics are currently still under review by paper authors, making your presented results an intermediate evaluation. Could you clarify how significantly you expect the final results to differ from the current intermediate evaluation? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive feedback! We address your comments below: > Line 47 mentions the UK AI Safety Institute’s Basic Agent scaffolding without an explicit citation, which should be included to clarify the source and structure of the agent scaffolding. Thank you for catching this! We’ve added a citation for UK AISI’s Basic Agent scaffolding and have explicitly referenced it throughout the paper. > Very limited annotations and data: a) only 18 papers, and b) only 8 submissions from PhD students to establish human baseline performance. This restricts the benchmark's representativeness and reliability. Thank you for the feedback! We’ve updated PaperBench to include 20 papers (up from 18), all of which have been finalized with author approvals. The rubric now has 8,316 individually gradable tasks (up from 6,125). We’d like to mention that we think the number of individually gradable tasks gives a more accurate picture of how diverse the tasks in PaperBench are, rather than the number of papers. We’ve also improved the human baseline experiment, extending the best performers after week one to continue their attempt for a total of four weeks, since we think this is a more reasonable timeframe for humans to replicate a paper. Considering the turnaround time and cost of running such a human baseline experiment, it was infeasible to conduct it with a larger sample size. > Not scalable: each paper requires a manually created rubric and associated weights, raising concerns about scalability due to the intensive manual effort involved. We agree that each rubric involves a significant amount of effort to create. We found that current models aren’t capable of end-to-end rubric creation (even after extensive prompt iteration) and, even when rubrics were generated end-to-end, they took a significant amount of time to review. Models were, however, useful in assisting humans in writing rubrics. 
We have released our rubric creation web application in the supplementary materials, which uses models to assist humans in the rubric creation process; we used this tool throughout our rubric-creation process. We’ve also added Appendix A.1 which outlines future directions of automated rubric creation, which may be possible in the near future. > Very expensive for subsequent researchers to utilize the benchmark: grading a single submission costs 150 USD, and evaluating 18 submissions would cost 2,700 USD for a single model or method. This high cost is prohibitive for others aiming to benchmark their models, thus limiting the benchmark's broader impact. Thank you for raising this point! We’ve significantly reduced the cost of grading to approximately 66 USD per paper (down from 150 USD) by using a cheaper model (o3-mini-high) for the Judge. Importantly, using o3-mini-high achieves comparable scores to the previous model on JudgeEval. In general, we expect the cost of grading to fall sharply as cheaper and more capable models become available in the future, meaning this barrier to entry will be short-lived. We also release a lighter-weight variant, PaperBench Code-Dev (which supersedes PaperBench Lite and is even more accessible and affordable), reducing the grading cost further from 66 USD per paper to approximately 10 USD per paper when using our optimized o3-mini based judge. We describe this new lightweight variant in detail in Section 2.6, but in short: it’s a lighter-weight variant of PaperBench designed for situations where you only want to see if the agent can write plausibly-correct code, without the overhead of running it. Grading costs are significantly reduced, since the judge grades fewer nodes, and doesn’t need to place the reproduce.log file in its context (which can be lengthy). We’ve also created an experimental grading feature whereby we grade rubrics which are “pruned” to a given depth, potentially reducing the cost of grading by 10x or more. 
We’ve added a new section, Appendix H, which covers this approach in more detail, including experimental results on JudgeEval. > Lack of experiments involving prominent open-source models such as DeepSeek-R1 or LLaMA, restricting the generalizability of the findings. Thank you for the suggestion! We’ve updated the paper to include a leading open-source model, DeepSeek-R1. We find that DeepSeek-R1 performs better than some closed-source frontier models (e.g., GPT-4o) but worse than others (e.g., Claude Sonnet 3.5). > You mention that 13 rubrics are currently still under review by paper authors, making your presented results an intermediate evaluation. Could you clarify how significantly you expect the final results to differ from the current intermediate evaluation? We've now completed all author reviews of rubrics, have re-run our experiments and found that our results are broadly consistent with the preliminary results presented originally. — Thank you for your valuable feedback! Please consider raising your review score if you feel that the paper has improved. --- Rebuttal Comment 1.1: Comment: Although I thank the authors for their response, unfortunately, my concerns about the extremely prohibitive evaluation cost, limited paper coverage, and scalability still remain. I would like to keep my current score. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up response. We respectfully disagree with the characterization of PaperBench as having an “extremely prohibitive” evaluation cost. We understand that PaperBench, like other agent-based benchmarks, has higher costs compared to traditional QA-style evaluations due to the longer task horizons and corresponding token usage required. However, these costs remain within the typical range for comparable agent benchmarks widely adopted in the research community. 
For instance, the creators of the Aide coding agent estimated evaluation costs of approximately 10k USD for SWE-bench Verified [1], and developers of the OpenHands agent reported approximately 6k USD for running the full SWE-bench [2]. In contrast, the lightweight PaperBench Code-Dev variant we introduced can similarly be run within budgets in the low single-digit thousands of USD, making it comparably accessible for typical academic research groups. While we recognize these costs might still limit certain researchers, we believe they are reasonably accessible and not unusually prohibitive when contextualized within the broader agent benchmarking space. We kindly ask the reviewer to clarify if there is a specific reason PaperBench’s evaluation costs are viewed as significantly more prohibitive compared to benchmarks like SWE-bench, which have similar or higher associated costs. Regarding the concern about limited paper coverage, we have previously noted that the most meaningful measure of PaperBench’s scope is the total number of individually gradable tasks (8,316), rather than just the paper count itself. The reviewer originally stated that the size of our dataset “restricts the benchmark's representativeness and reliability”; we invite the reviewer to elaborate on why our dataset is limited in its representativeness and reliability, despite having a large number of individual tasks across a range of sub-tasks for each paper. We sincerely appreciate your constructive feedback. [1] https://news.ycombinator.com/item?id=42638605#:~:text=Hey%21%20One%20of%20the%20creators,of%20Aide%20here [2] https://github.com/All-Hands-AI/OpenHands/issues/1693#issuecomment-2105057205
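To make the depth-pruning idea from earlier in the rebuttal (Appendix H) concrete, the sketch below shows how truncating a rubric at a fixed depth collapses deep sub-trees into single gradable nodes, shrinking the number of judge calls. This is a hypothetical illustration on a uniform toy tree, not an actual PaperBench rubric, and the grading of a pruned node is of course done by the judge, which is not shown here.

```python
# Hypothetical sketch: pruning a rubric tree to a fixed depth reduces the
# number of leaf nodes the judge must grade (illustrative toy tree only).

def prune(node, depth):
    """Truncate the rubric at `depth`; a truncated internal node becomes a
    single leaf that the judge would grade as one unit."""
    if depth == 0 or "children" not in node:
        return {"name": node["name"]}
    return {"name": node["name"],
            "children": [prune(c, depth - 1) for c in node["children"]]}

def num_gradable_leaves(node):
    # Each leaf corresponds to one judge call.
    if "children" not in node:
        return 1
    return sum(num_gradable_leaves(c) for c in node["children"])

def uniform(depth, name="r"):
    # Toy rubric: branching factor 4, uniform depth.
    if depth == 0:
        return {"name": name}
    return {"name": name,
            "children": [uniform(depth - 1, f"{name}.{i}") for i in range(4)]}

rubric = uniform(4)
print(num_gradable_leaves(rubric))            # 256 judge calls at full depth
print(num_gradable_leaves(prune(rubric, 2)))  # 16 judge calls when pruned to depth 2
```

On this toy tree, pruning to depth 2 cuts the number of judge calls 16-fold, in the spirit of the "10x or more" cost reduction mentioned above.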
What is Adversarial Training for Diffusion Models?
Reject
Summary: This paper investigates adversarial training (AT) tailored specifically for diffusion models (DMs), emphasizing that adversarial robustness for DMs should enforce equivariance rather than invariance. The authors introduce a new approach where perturbations, either random or adversarial, are added to enforce smoothness in the diffusion trajectories. Empirical evaluations on synthetic and real datasets (such as CIFAR-10) demonstrate that their method effectively enhances robustness, reduces memorization, and improves the resilience of diffusion models under various types of noise and adversarial attacks. Claims And Evidence: The current experimental evaluation appears limited, primarily relying on synthetic and relatively small-scale datasets, thus leaving questions regarding the generalizability and scalability of the proposed approach largely unanswered. To strengthen the claims, it would be beneficial to evaluate the method on larger-scale datasets or real-world noisy datasets, thereby providing stronger and more convincing evidence of practical robustness and broader applicability. Methods And Evaluation Criteria: The authors’ approach of defining AT for diffusion models through equivariance is conceptually sound and makes sense given the generative nature of these models. Theoretical Claims: The formulations appear generally sound, and I did not notice significant issues. Experimental Designs Or Analyses: The experimental setup and analyses appear generally sound, appropriate, and sufficiently clear. Supplementary Material: I have reviewed the supplementary material, with particular attention to the detailed experimental results and explanations presented in the appendix. Relation To Broader Scientific Literature: The authors have clearly discussed relevant work specific to diffusion models; however, it would be beneficial if they could additionally consider related adversarial robustness techniques from a broader perspective. 
Specifically, several adversarial defense methods have previously leveraged denoising strategies to improve robustness, even though not limited to diffusion models. Including or comparing against these methods might further strengthen the paper by placing it in a wider context of adversarial robustness literature. Essential References Not Discussed: The authors have discussed relevant references clearly, but as mentioned earlier, it would be beneficial to also include references to general adversarial defense techniques that utilize denoising strategies, even if they are not specifically designed for diffusion models. For example, works focusing on adversarial purification or denoising-based robustness methods (such as those based on randomized smoothing or explicit denoising modules) could be relevant for broader context and are currently not adequately discussed. Other Strengths And Weaknesses: Overall, the paper is clearly structured and easy to follow. The conceptual logic is presented clearly, and the theoretical analysis provided is insightful and thorough. The main limitation lies in the experimental evaluation, which remains somewhat narrow in scope and dataset complexity. Moreover, comparisons or discussions with other adversarial robustness methods that utilize denoising or smoothing strategies are currently insufficient, and addressing this gap would significantly strengthen the manuscript. Other Comments Or Suggestions: N/A Questions For Authors: Can you elaborate on how the proposed method compares to or differs from existing adversarial defense methods that also employ denoising or randomized smoothing strategies? Including a discussion or comparison could clarify the uniqueness and significance of your approach. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and constructive criticism. We are glad the reviewer mentioned that our method and analysis are sound, appropriate, and sufficiently clear, giving comments such as: the paper is clearly structured and easy to follow; the conceptual logic is presented clearly; and the theoretical analysis provided is insightful and thorough. We hope to answer the final remarks:

**1. Generalization to large-scale datasets - rev `a6SF`; Effectiveness on more complex datasets - rev `xC6d`**

We show our method is still effective on more complex datasets, covering a wide range of cases: more samples, more classes, higher resolution. We will add them to the revised version. After corrupting the data as described in L.262-267-left, we provide results on:

- **CelebA**, which consists of 202K face images of 64x64 pixels (x4 CIFAR-10 cardinality, x2 resolution), being a benchmark dataset for the image generation task;
- **TinyImageNet**, 100K samples at 64x64 resolution with 200 classes (x20 more classes than CIFAR-10). TinyImageNet is not a benchmark for generation; we tested only the faster DDIM at $\sigma=0.2$ due to the time limit.

| Method $\downarrow$ | CelebA $\sigma=0.1$ | CelebA $\sigma=0.2$ |
|--|--|--|
| DDPM | 56.72 | 96.08 |
| Robust$_{\text{adv}}$ - DDPM | **14.4** | **16.4** |

| Method $\downarrow$ | TinyImageNet $\sigma=0.2$ |
|--|--|
| DDIM | 66.2 |
| Robust$_{\text{adv}}$ - DDIM | **48.9** |

We provide qualitative results of these experiments, showing our generation vs. the baselines:

- [Qualitative comparison on CelebA](https://bashify.io/i/CPm0D6)
- [Qualitative comparison on TinyImageNet](https://bashify.io/i/bpcTIN)

**The same outcome we have in the paper transfers to these two datasets**: For both of these new datasets, we observe the same pattern as before. Our approach induces smoothness in the data distribution, thereby removing the noise in case it is present. 
Despite the Gaussian noise increasing, our method still keeps the FID lower than the baseline. This is achieved by trading variability for robustness: it still generates images that are natural yet more “smooth”, with a less cluttered background, or it makes abnormal samples more “normal”. For instance, in CelebA, Robust DMs avoid generating faces that resemble outliers: faces where image quality is so low that the data does not look like a canonical face anymore. The same applies to TinyImageNet, yet the baseline is weaker since we used the same network as in CIFAR with a larger input size and no hyperparameter tuning due to time constraints. Still, the model trades off learning variability for improved smoothness. Indeed, in the qualitative images above we see that, despite Robust DMs generating smoother faces, they also remove the added Gaussian noise. **2. Discussions of existing adversarial defenses with denoising or randomized smoothing** In response to rev. `riys` and rev. `a6SF`, we will add a mini-section at the end of Section 4 discussing adversarial defense techniques, including randomized smoothing–e.g., Cohen et al. (2019)–and denoised smoothing [C,D]. For rev. `a6SF`, please see **point 1** in the response to rev. `riys`. **We also clarify that in our case denoising is not related to randomized smoothing but means handling noise in the dataset when training DMs, thereby denoising the data distribution.** While many such methods were originally designed for classifiers, ours is specifically designed to make the DM robust in generation. 
In the context of AP, recent work [G] shows that adversarial perturbations disrupt the generative process of diffusion models, causing deviations from the clean trajectory. While our current focus does not extend to adversarial purification, we note that enforcing local smoothness in the score field might help mitigate such deviations. In fact, interpreting our work along with a classifier is very interesting because this could lead to future work applying our Robust DMs to AP. Due to space constraints, here we list only a few key references but will be happy to add any you suggest in the next discussion phase. [C] (Certified!!) Adversarial Robustness for Free! ICLR 2023 [D] Denoising Masked Autoencoders Help Robust Classification ICLR 2023 [E] Towards Understanding the Robustness of Diffusion-Based Purification: A Stochastic Perspective ICLR 2025 [F] ADBM: Adversarial Diffusion Bridge Model for Reliable Adversarial Purification ICLR 2025 [G] MimicDiffusion: Purifying Adversarial Perturbations via Mimicking Clean Diffusion Models CVPR 2024
Summary: This work endeavors to construct a novel adversarial training approach for diffusion models. By comparing with the AT process of traditional classification models, the authors suggest that the key to AT for DMs resides in equivariance. Consequently, the perturbation process and the adversarial training loss in the reverse process are derived. Verification on CIFAR-10 demonstrates the effectiveness of the method. Claims And Evidence: The authors have essentially provided sufficient evidence to support their motivation. However, there is one aspect that I am not entirely clear about: how exactly does Figure 2(a) reflect robustness information? What information do the colored line clusters signify? Particularly in the case of Strong Inliers in the upper portion, I did not detect any obvious distinction between DDPM and Robust adv. Methods And Evaluation Criteria: 1. In Formula 7, δ is utilized in Algorithm 1. However, δ_{adv} in Formula 8 does not make an appearance in the algorithm. So how does δ_{adv} function? 2. In "Key change is equivariance," the authors propose a new AT loss expression, ||eps_theta(x_adv)-(eps+delta)||. Additionally, the loss expression at the end of Formula 10 becomes ||eps_theta(x_adv)-(eps_theta+delta)||. Is there any inconsistency between eps and eps_theta? 3. I did not observe any references for the attack method employed in Algorithm 2. Is this an attack algorithm constructed in this work? Does there exist a publicly available and commonly used method for conducting adversarial attacks on diffusion models? Can the authors conduct verification with it? Theoretical Claims: Proofs for theoretical claims are complete. Experimental Designs Or Analyses: 1. This work has not been validated on more complex datasets like ImageNet or TinyImageNet. Is it possible to attain effective robustness on these datasets by fine-tuning or AT on such datasets? 2. 
Table 2 shows that the performance of the authors' method is substantially lower than that of the baselines DDPM and DDIM, and even lower than the adversarial-perturbation performance. In traditional AT, although it may lead to a certain decline in clean performance, the clean performance is typically still higher than the adversarial performance. Does this work lack generalization to clean samples? 3. The authors focus on the evaluation process in Chapter 3 of the paper. Regarding Chapter 3 of the Appendix, does the authors' training process involve training from random initialization or fine-tuning an existing diffusion model? How does the cost of AT compare to ordinary training? Can it be compared in terms of time or computation? Supplementary Material: I checked the attack formulation, the forward and backward derivation of diffusion, and the comparison of training results under different noise conditions. Relation To Broader Scientific Literature: This work provides a robust diffusion model training method, which may be helpful for improving the robustness of the widely used Stable Diffusion. Essential References Not Discussed: The references in the work are sufficient. Other Strengths And Weaknesses: The authors' presentation of the experimental results and insights is highly professional and elegant. Although a sufficient number of subjective and objective experiments have been provided on CIFAR-10, there is still a relative dearth in the selection of natural image datasets (with experiments only being conducted on CIFAR-10). Other Comments Or Suggestions: There is no other comment. Questions For Authors: 1. Demonstrate the effectiveness of the method on more complex datasets. 2. Clarify the description of the method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the feedback and constructive criticism. We are glad the reviewer acknowledged that we provided sufficient evidence to support our motivation, that our method could be helpful for the future robustness of SD, and that our experimental results and insights are highly professional and elegant. Below we respond to the remaining remarks:

**1. Per reviewer request, below we provide clarification of the description of the method.**

> How exactly does Figure 2(a) reflect robust information?

As mentioned at L.74-77 of our contribution, we are the first to reconnect AT for DMs as a denoising method, so Fig. 2(a) shows how DMs trained with AT behave when their training is affected by data noise, e.g., points with either strong inlier noise or uniform outliers. DDPM is attracted by noisy points and generates both the plane *and* the noise, whereas ours generates *only* the plane *despite being trained on the same noisy points*.

> What information do the colored line clusters signify?

The colored lines are the 3D trajectories of the generated points w/ Eq. (5). Starting from $\mathbf{x}\_0 \sim \mathcal{N}(0,1)$, the trajectories flow toward the learned $p_{\theta}(\mathbf{x})$. Each color is a different sampling. We will add this explanation to L.132, second column, when citing Fig. 2(a).

> *Fig. 2(a)-"Strong Inliers"*, no distinction DDPM vs Robust

Zooming in, DDPM generates trajectories going behind or out of the plane, i.e., the cyan-colored trajectory in the *opposite direction of the plane*. Ours does not produce those diverging trajectories, better reaching the center of the plane. **Trading variability for robustness**: while we obtain robustness, we capture less of the variability that is in reality present in the data distribution. In other words, we trade off variability for smoothness. A similar logic is mentioned in the caption of Fig. 2(b) at L.124.

> In Formula 7, δ is utilized in Algorithm 1. However, δ_{adv} in Formula 8 does not make an appearance in the algorithm. So how does δ_{adv} function?

Thank you. This is a typo on our side that we will fix. L.227 of Algo. 1 should be using $\boldsymbol{\delta}\_{\text{adv}}$, optimized as in Eq. (8). We also need to update Eq. (6) with $\boldsymbol{\delta}\_{\text{adv}}$. In a nutshell, $\boldsymbol{\delta}\_{\text{adv}}$ represents the perturbation *after* the gradient step, at the end of the optimization, while $\boldsymbol{\delta}$ is the random initialization.

> Is there any inconsistency between eps and eps_theta?

At L.211-right there is a single loss that regresses both $\epsilon$ and $\delta$; doing so we enforce equivariance yet not smoothness in the prediction. For smoothness, you need $\epsilon$ to be $\epsilon_\theta$, so that it is a function of both the weights and the input $x_t$. In this sense, the regularization (Eq. 10) can be seen as two outputs of the same network interacting together to enforce smoothness (L.233-234). We will clarify this in the revision.

> Is Algo. 2 constructed in this work? Do there exist commonly used attacks on DMs?

The attack is constructed in this work as an analytical tool for the worst-case robustness of DMs. We verified that, in contrast to existing attacks that compute a single input perturbation and thus perturb the DM trajectory in an "averaged" sense, ours computes an optimal perturbation at every time step that maximizes the DM loss. It demonstrates that there exists a sequence of perturbations that could maximally disrupt the trajectory at each inference step. Such an analytical framework is valuable for understanding the fundamental sensitivity of the generative process. Existing attacks have been predominantly developed for latent DMs, and do not transfer well to pixel-based DMs [B].

> Although traditional AT may lead to a certain decline in clean performance, the clean performance is typically still higher than that of adv. Does this work lack generalization to clean samples?

The clean performance is better than performance under attack in our case too. If we attack a good number of time steps, the FID is worse than without attack; see Fig. 7. For generalization without attack, please see the discussion above on **trading variability for robustness**: Tab. 2 is without attacking the DM; in that case we have a better FID (~24) when we train with data corrupted at sigma=0.1 because we smooth out the noise, whereas in the case of clean data (sigma=0), we smooth some "part" of the data distribution (FID ~28).

**2. Fine-tuning vs full training; computational cost.**

We always trained from scratch, thereby avoiding fine-tuning. A time-complexity overhead estimate is given at L.298-305. DDPM operations are a single forward pass for getting the prediction and a backward pass for the weight update. With our regularization, we have to add a backward pass for getting the gradients of the adversarial loss over the perturbation, then repeat the DDPM operations above.

**3. Effectiveness on more complex datasets.**

Please see the response to rev. `a6SF`.

[B] Pixel is a Barrier: Diffusion Models Are More Adversarially Robust Than We Think, NeurIPSW 2024
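The invariance-vs-equivariance distinction discussed in this thread can be illustrated with a small numeric sketch (illustrative only, not the paper's code; the toy shift-equivariant predictor `eps_net` stands in for $\epsilon_\theta$, and a perfect clean prediction is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
x_t = rng.normal(size=4)            # noisy input at some diffusion step
delta = 0.1 * rng.normal(size=4)    # adversarial input perturbation

# Toy shift-equivariant predictor: eps_net(x + d) = eps_net(x) + d.
bias = rng.normal(size=4)
def eps_net(x):
    return x - bias

eps = eps_net(x_t)                  # assume a perfect prediction on the clean input
pred_adv = eps_net(x_t + delta)

# Invariance-style target (Eq. 9 flavor): penalizes the output for moving at all.
loss_inv = np.sum((pred_adv - eps) ** 2)
# Equivariance-style target (Eq. 10 flavor): the target moves together with delta.
loss_equiv = np.sum((pred_adv - (eps + delta)) ** 2)
# For an equivariant predictor, loss_equiv is exactly zero, while loss_inv
# still charges it ||delta||^2 -- an invariance objective would push the
# model away from the desired equivariant solution.
```

The sketch shows why an invariance loss is the wrong objective for a denoiser: the equivariant (desired) predictor incurs zero equivariance loss but a nonzero invariance loss.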
Summary: This work studies adversarial training for diffusion models, highlighting its fundamental differences from adversarial training for classifiers. Unlike adversarial training for classifiers, which enforces invariance, adversarial training for diffusion models requires equivariance to ensure the diffusion process remains within the data distribution. The proposed method enforces smoothness in the diffusion flow and aims to improve robustness to noise, outliers, and data corruption without relying on specific noise assumptions. Experimental results also demonstrate its effectiveness.

Claims And Evidence: Yes, in general the claims are supported by clear and convincing evidence. However, two claims are not articulated with clarity. 1. *Claim 1*: Classifiers require invariance under adversarial perturbations, but DMs demand equivariance so that the diffusion process remains within the data distribution. *Evidence 1*: Section 2.4 provides the analysis and the explanations as evidence. However, they are not strong enough, theoretically or empirically. The performance of the proposed method is not compared with the naive adversarial training of diffusion models as stated in Equation (9). 2. *Claim 2*: The proposed adversarial training method makes diffusion models more robust to adversarial attacks. *Evidence 2*: The experimental design in Section 4 to evaluate this robustness is not comprehensive enough. Only one white-box attack (FGSM) is applied, while other common but more sophisticated adversarial attacks, such as PGD and AutoAttack, are ignored. Hence *Claim 2* is not convincingly supported.

Methods And Evaluation Criteria: As stated in **Claims And Evidence**, the proposed adversarial training seems to partially make sense for diffusion models in terms of improving generative power. It is natural and well articulated that adversarial training can handle the memorization issue in diffusion models.
However, I am still not convinced that robustness can be evaluated by FID and IS when defending against adversarial attacks. Please see the second question in **Experimental Designs Or Analyses**.

Theoretical Claims: There is no theoretical claim in this work.

Experimental Designs Or Analyses: To the best of my knowledge, the experimental design is not very sound. Although it evaluates the performance of the proposed algorithm for adversarial training of diffusion models for denoising, why is the adversarial attack chosen to be FGSM, as stated in the Appendices? If the adversarial training is via PGD, does it change the performance? Also, I wonder: if diffusion models are successfully attacked, such that the generated images are all wrongly classified, will FID and IS be influenced?

Supplementary Material: Yes, I reviewed the supplementary material: Section 1 on the extra preliminaries and Sections 2 and 3 on the qualitative evaluation. All the content appears to be valid.

Relation To Broader Scientific Literature: This paper contributes to both the robustness and the generative power of diffusion models. It challenges existing understanding and methods for adversarial training via an equivariance-based approach. It also demonstrates empirical improvements in robustness and memorization reduction. However, theoretical justifications and evaluations on stronger attacks would strengthen its impact in the broader scientific context.

Essential References Not Discussed: When discussing adversarial training for diffusion models, especially for enhancing robustness, one key paper on a similar task should be discussed in the introduction: adversarial training on purification (AToP) [1]. This would improve the comprehensiveness of this work. [1] G. Lin, C. Li, J. Zhang, T. Tanaka, and Q. Zhao, Adversarial training on purification (AToP): Advancing both robustness and generalization, ICLR 2024.
Other Strengths And Weaknesses: **Strengths**: 1. This work proposes a novel adversarial training method for diffusion models to enhance robustness and generation quality via adversarial perturbations induced in the forward and reverse processes. It is an original idea that has not been sufficiently studied before. 2. The significance of this work is also a highlight. It bridges generative power and robustness in the training process of diffusion models, and may thus pave the way for future studies in both areas. **Weaknesses**: 1. The logic of this work needs to be polished to enhance clarity and coherence. As mentioned above, the major claims should be supported by additional experimental results or theoretical guarantees.

Other Comments Or Suggestions: NA

Questions For Authors: Please see **Experimental Designs Or Analyses**.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the feedback and constructive criticism. We are glad the reviewer appreciated the novelty and original idea that could pave the way for future studies, and the significance of our work. Below we respond to the remaining remarks:

**1. Clarification for rev. `riys` and rev. `a6SF`: Our attack on DMs disrupts generation, not a classifier.**

We remark that the attacks in our paper are in the trajectory space of the DM, intended to disrupt its generative capabilities, not to fool a classifier. See Sec. 3.4. Thus, we have chosen SOTA metrics appropriate for DMs (FID/IS).

**2. Invariance.**

It was our first failed attempt. We report comparisons across three datasets showing that invariance, i.e., replacing the regularization part in Eq. (10) with the invariance objective of Eq. (9), makes the model diverge from the real data distribution $p_{\text{data}}(\mathbf{x})$, actually learning the noise. Results are shown below. We provide [qualitative results showing the diffusion trajectories of Robust DM trained **with invariance** vs DDPM](https://bashify.io/i/cZymJF), showing that invariance actually performs worse than DDPM. Our method is shown in Fig. 2(a) and Fig. 5. We have [updated Fig. 3(b) with invariance, also trying different $\lambda$ in Eq. (10)](https://bashify.io/i/8a6Rl5) and we will add it to the improved version of the paper. Finally, we confirm that the FID on CIFAR-10 with invariance is 356.9, much worse even than DDPM. Fig. 1 in the supp. material shows that invariance leads the model to learn a different vector than the desired one. Moreover, in Sec. 1.3 of the supp. material, the ELBO derivation provides the theoretical explanation supporting the need for equivariance in training.

**3. Robustness to other attacks.**

We ran an iterative attack on DM trajectories, i.e., PGD with 20 iterations. The table **FID under FGSM** below reproduces Fig. 7 in the paper, whereas **FID under PGD** reports the results with PGD at 20 iterations. We see that despite iterating the attack with PGD, ours still maintains a wide gap in FID compared to the baseline DDPM, and PGD does not increase the FID much.

**FID $\downarrow$ under FGSM** (like Fig. 7)

| %time steps attacked $\rightarrow$ | 25 | 50 | 75 | 100 |
|--|--|--|--|--|
| DDPM | 49.8 | 131.7 | 190.4 | 243.4 |
| Robust$_{\text{adv}}$ | **19.29** | **52.0** | **90.7** | **127.7** |

**FID $\downarrow$ under PGD** (new)

| %time steps attacked $\rightarrow$ | 25 | 50 | 75 | 100 |
|--|--|--|--|--|
| DDPM | 55.7 | 134.5 | 200.3 | 248.1 |
| Robust$_{\text{adv}}$ | **22.7** | **55.8** | **98.1** | **128.6** |

Regarding the use of black-box attacks or AA: adapting attacks like SQUARE or FAB-T from classifiers to DMs is very interesting but out of the scope of this paper, since they need to be "ported" to DMs in a non-trivial way. Nevertheless, we will mention this line of research as future work. While some works in adversarial purification (AP) focus on attacking DMs and classifiers simultaneously, those approaches are tailored to classification. In contrast, we study the robustness of the generative modeling capabilities of DMs rather than targeting their role in purification. **See point 1.** More information is in the reply to rev. `a6SF` at **point 2.**

**4. Why FGSM, and will using PGD change the performance?**

As we stated in Sec. 3.4, Alg. 2, and Appendix A.1, crafting attacks for DMs requires care. Note that we implemented **FGSM with random start** (Eq. 7). FGSM was chosen because it is the simpler and faster attack, since training with PGD is too computationally expensive for large datasets. Secondly, papers like [A] suggest that the addition of a random start can make it resistant to multi-step attacks (PGD). Nevertheless, following this remark, we will mention PGD-training as future work in Sec. 5.

**5. Discussion and references on adversarial purification and denoising for AT.**

Following your remark and also rev. `a6SF`'s, we will add a mini-section at the end of Section 4. We will cite and discuss the suggested paper [1]. Please see the response to reviewer `a6SF`.

Remaining questions:

> if diffusion models are successfully attacked, where the generated images are all wrongly classified, will the FID and IS be influenced?

In the context of DMs, here considered to be **unconditional**, a successful attack disrupts model generation (it generates nonsensical images, **see point 1**). Some samples are provided in Fig. 7, bottom. To answer the question: if DMs are successfully attacked, they generate noise-like images, thereby disrupting both FID/IS and classification. The case where DMs generate images that fool a classifier while FID and IS stay the same is not considered in our work but could be future work using ours as a purifier.

> still not convinced that the robustness can be evaluated by FID and IS when defending against adversarial attacks

Please see the reply above in **point 1.**

[A] Fast is better than free: Revisiting adversarial training, ICLR 2020

---

Rebuttal Comment 1.1: Comment: I thank the authors for their explanations. All my questions and concerns are addressed. I would like to increase my score by 1.

---

Reply to Comment 1.1.1: Comment: We are sincerely grateful for your time and insightful feedback. We are pleased that the revisions contributed to improving the paper, and we greatly appreciate the increased score.
Latent Thought Models with Variational Bayes Inference-Time Computation
Accept (poster)
Summary: This paper presents a novel method called the latent-thought language model (LTM) for autoregressive language modeling. The LTM introduces an additional family of scaling dimensions, latent thought vectors, to implicitly learn a sequence representation and guide generation. Training an LTM requires sampling from the variational posterior of the latent vector, which is done by MCMC or variational Bayes. The resulting method enjoys good scaling properties, achieving better perplexity or sample accuracy with fewer tokens and smaller model sizes.

Claims And Evidence: The claims of this paper are supported by clear and convincing evidence.

Methods And Evaluation Criteria: There are some potential issues with the implementation of the method.
- The authors incorporate a latent variable $z$ in each layer of the transformer, as illustrated by Figure 2. This makes the masked attention of classical autoregressive models not applicable, and will require $L$ NFEs to evaluate the probability.
- An additional variational posterior model $q_{\theta}(z|x)$ is employed to train the LTM, which may introduce at least 2x memory. Moreover, the authors do not explicitly explain how they design $q_{\theta}(z|x)$.
- The authors state that they use cross-attention to incorporate $z$, which is still vague to me. I suggest the authors use mathematical expressions or illustrations to explain how the generative model and the variational posterior model are designed.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses:
- It is not certain that the model indeed learns informative latent vectors. The authors could examine the posterior distribution of $z$ given a sequence $x$; for example, the $z$s may have a large correlation if the $x$s have similar semantic meanings.
- No computational cost in terms of training time or memory is reported.

Supplementary Material: I've checked the supplementary material.
Relation To Broader Scientific Literature: I have no other comments.
Essential References Not Discussed: No.
Other Strengths And Weaknesses: I have no other comments.
Other Comments Or Suggestions: I have no other comments.
Questions For Authors: I have no other comments.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: *Thank you for your thoughtful review acknowledging the novelty of our work. We appreciate your recognition that our claims are supported by clear and convincing evidence. We address your concerns as follows.*

**1. Clarification on the cross-attention mechanism.**

We wish to humbly clarify a potential misunderstanding regarding the reviewer's comment that incorporating $z$ makes "mask attention in classical autoregressive models not applicable." Our Latent Thought Models (LTMs) adopt the decoder Transformer of the original transformer paper (Vaswani et al., 2017), where each layer $l$ of the decoder consists of (1) a self-attention sub-layer with attention of $x_t$ to $x_{\leq t}$ (masked), (2) a cross-attention sub-layer with attention of $x_t$ to $z_l$, and (3) a feed-forward sub-layer. The cross-attention sub-layer (2) for each layer $l$ is ${\rm softmax}\left(\frac{Q_{h_l}K_{z_l}^\top}{\sqrt{d}}\right)V_{z_l},$ where $h_l$ is the hidden embedding of the input $x$ and $z_l$ consists of the latent thought vectors at layer $l$. Crucially, the latent vectors $z$ are inferred by gradient-based variational inference. This means there is no network that maps from $x$ to $z$. It might be a slight misunderstanding that LTMs "will require $L$ NFEs to evaluate the probability," if $L$ means the number of layers. Given a latent $z$, evaluating $p(x|z)$ only needs 1 NFE. The reviewer correctly notes that additional computation is needed, but this is for approximating the posterior $p(z|x)$ or evaluating the ELBO. This variational inference requires $T_{fast}$ NFEs, scaling with the number of inference steps $T_{fast}$, not the number of layers $L$.

**2. Memory requirements of the additional variational posterior model (possibly 2x)?**

We wish to respectfully clarify that the 2x memory requirement regarding our additional variational model may be a slight misunderstanding.
The classical variational Bayes of our LTMs does not require a separate "variational encoder model"; it only needs the local parameters $(\mu, \sigma)$. The memory overhead for these parameters is almost negligible, with our LTM-L incurring just a 1.5% additional memory cost on H100 compared to same-sized AR models with the same batch size. This is more efficient than the encoder (inference model) used in amortized variational inference (VAEs), which typically requires the same memory as the decoder.

**3. Demonstration that the model learns informative latent vectors (e.g., posterior distribution analysis)**

Following your advice, we conducted an in-depth analysis of our latent thought vectors through progressive latent inclusion experiments; see the [probing results (link)](https://anonymous.4open.science/r/icml25_submission3398_probing-F2BB/). Our findings reveal: (1) Clear hierarchical latent representation in LTMs, with higher layers integrating information from lower ones. LTM-M shows a significant accuracy jump at the final layer (55% to 100%), while LTM-L demonstrates a more gradual information distribution with a notable improvement at layers 9-10 (65% to 95%). (2) Qualitative text reconstruction reveals a progression from basic grammatical elements (22% at layers 1-3) to structural patterns (30% at layers 1-6) to coherent semantic content (65% at layers 1-9) to complete sentences (99% at layers 1-10).

**4. Computational cost reporting (training time, memory usage)**

We reported the trainFLOPs/tok and the number of training tokens in Table 1 of our submission. Here we provide other costs such as training time and memory usage.
| Model | Parameters | Memory Usage | Time per Epoch | Gradient Accumulation |
|-------|------------|--------------|----------------|------------------------|
| GPT-2-L | 762M | 52214 MiB | 924.63 ms | 4 |
| GPT-2-M | 345M | 62706 MiB | 453.71 ms | 2 |
| LTM-M (steps=16) | 51M | 59040 MiB | 2209.76 ms | 1 |
| LTM-L (steps=64) | 76M | 68158 MiB | 13190.58 ms | 1 |

All benchmarks were conducted on a single node with 8 H100 GPUs with batch size 512. In our approach, we trade the compute of slow learning of global parameters for fast learning of local parameters (lines 242-246), which is essential to the LTM's data efficiency.

*We will revise our paper based on your valuable feedback. Thank you!*
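The cross-attention sub-layer described in point 1 of this rebuttal can be sketched in a few lines (a single-head numpy toy; all names and shapes are illustrative assumptions, not the actual implementation):

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def latent_cross_attention(h_l, z_l, Wq, Wk, Wv):
    """Single-head cross-attention of token embeddings h_l (T x d) onto the
    latent thought vectors z_l (m x d) of the same layer:
    softmax(Q K^T / sqrt(d)) V, with Q from h_l and K, V from z_l."""
    Q, K, V = h_l @ Wq, z_l @ Wk, z_l @ Wv
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

rng = np.random.default_rng(0)
T, m, d = 5, 3, 8                 # tokens, latent vectors per layer, width
h_l = rng.normal(size=(T, d))
z_l = rng.normal(size=(m, d))
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
out = latent_cross_attention(h_l, z_l, Wq, Wk, Wv)  # shape (T, d)
```

Because the queries come from tokens and the keys/values from $z_l$, every token attends to all latent vectors with no causal mask, while the separate self-attention sub-layer keeps the usual masking over $x_{\leq t}$.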
Summary: This paper proposes a class of probabilistic language models called Latent-Thought Language Models (LTMs), which introduce explicit latent vectors into the layers of transformers. The authors claim that this setup yields new "scaling dimensions" beyond traditional LLMs, allowing more efficient use of training compute per token. They report gains in both sample efficiency and parameter efficiency, as well as improved validation perplexity, zero-shot language modeling, and conditional/unconditional text generation.

Claims And Evidence:
- Supported claims: The paper's experiments compare LTMs with standard autoregressive models and discrete diffusion models, showing quantifiable improvements in perplexity and sample efficiency; these results appear well supported by the reported metrics.
- Potentially under-supported claims: (1) Latent thought vectors as a "Language of Thought": the cognitive analogy is intriguing but not strongly backed by empirical studies. (2) Multiple inference steps: the statement that "increasing inference steps" yields better performance seems reasonable, but the exact algorithm (and how it differs from typical amortized VI) is not fully explained. Why is it necessary to run inference for multiple steps?

Methods And Evaluation Criteria:
- Methods: The authors' methodology of adding latent vectors to a Transformer decoder, learned via a variational approach, makes sense.
- Evaluation criteria: Validation perplexity and zero-shot performance are standard metrics for assessing language models, so these are appropriate. Additional benchmarks (e.g., text generation tasks, in-context reasoning tasks) improve the assessment.

Theoretical Claims: N.A.

Experimental Designs Or Analyses:
- Experimental soundness: The overall design, comparing against standard LLMs and a discrete diffusion baseline, makes sense. Reporting perplexities and a few-shot test is appropriate.
- Potential weakness: The paper does not sufficiently detail hyperparameter choices or the exact inference procedure (especially the "multiple inference steps"), potentially leaving some design choices unclear. The authors claim that latent vectors store global context, but there is no direct analysis (such as an attention map or a latent-dimension probe) confirming that they indeed capture such information.

Supplementary Material: Yes, it contains experimental details.

Relation To Broader Scientific Literature: Prior research seems to incorporate VI with pre-trained models for fine-tuning, while this work investigates training from scratch; the significance of this work thus lies largely in the engineering. However, the code and implementation are not provided.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:
-- Key strengths:
- Empirical gains in perplexity and few-shot reasoning.
- Offers new "scaling dimensions" and suggests paths for more efficient training or inference.
-- Key weaknesses:
- Vague details about the practical inference procedure and reproducibility.
- Some conceptual leaps (e.g., short windows forcing global information into latent variables) without robust empirical backing; the relation between the method design of this work and the cited cognitive concepts is unclear.

Other Comments Or Suggestions: N.A.

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: *Thank you for your constructive feedback. Below are our responses.*

**1. Comparison to amortized VI and clarification on multi-step inference**

We wish to humbly clarify a possible misunderstanding regarding the variational inference (VI) in our Latent Thought Models (LTMs). LTMs employ the classical variational Bayes (VB) framework rather than amortized inference (as in variational autoencoders (VAEs)). As detailed in Sec. 2.2 and Algorithm 1 (lines 165-183), for each training sequence $x$, the posterior $p(z|x)$ is approximated by a variational distribution $N(\mu, \sigma^2)$. Note that $(\mu, \sigma^2)$ are local parameters specific to each sequence $x$, and are optimized by maximizing the evidence lower bound (ELBO) for each $x$. To clarify the reviewer's comment about "multi-step inference": it refers to the iterative update of $(\mu, \sigma^2)$, where the number of steps is the number of gradient-ascent steps for maximizing the ELBO for each $x$. In contrast to our VB approach, in amortized inference all training sequences share an encoder (or inference model) with global parameters $\phi$. The encoder maps each $x$ to the corresponding $(\mu, \sigma^2)$ directly. There is no iterative update in typical amortized encoders. Our ablation study in Sec. 3.4 confirmed that classical VB achieves a better ELBO and avoids VAEs' potential posterior-collapse problem.

**2. Implementation and hyperparameters**

We described our inference algorithm on page 4, Algorithm 1. We discussed the key hyperparameters (`inference steps` and `latent size`) in Sec. 3.2. Other hyperparameters can be found in Appx. 1 & 2. To further address your concern, we have created anonymous links for both [pseudocode (link)](https://anonymous.4open.science/r/icml25_submission3398-F84E/) and [executable code (link)](https://anonymous.4open.science/r/icml25_submission3398_code-4F9D).

**3. About the novelty of our work regarding variational inference**

To our knowledge, our work incorporating classical VB in language models is novel, as confirmed by Reviewer FqJd and Reviewer ReTi. We are unaware of prior work using classical VI for fine-tuning LLMs. If we are provided with any related references, we would be happy to cite them in the revision. The explicit latent abstraction in our model can be an important direction to explore for language modeling.

**4. Empirical backing of the cognitive analogy of "Language of Thought"**

Thanks for your comment. Our model was conceptually inspired by the "Language of Thought" model in cognitive science. However, there has not yet been a machine learning model for it. Our work may be considered a first step towards such a model. To further address your concern, we conducted an in-depth analysis of our latent thought vectors through progressive latent inclusion experiments; see the [probing results (link)](https://anonymous.4open.science/r/icml25_submission3398_probing-F2BB/). Our findings reveal: (1) Clear hierarchical latent representation in LTMs, with higher layers integrating information from lower ones. LTM-M shows a significant accuracy jump at the final layer (55% to 100%), while LTM-L demonstrates a more gradual information distribution with a notable improvement at layers 9-10 (65% to 95%). (2) A case study reveals that lower layers capture grammatical elements while higher layers encode increasingly complex semantic content.

**5. Direct analysis confirming latent vectors capture global information**

In lines 149-164, the latent vectors $z$ control the generation of each token in $p(x_{t}|z, x_{<t})$ via cross attention, and $z$ is inferred from the whole sequence $x$: $p(z|x) \propto p(z) \prod_t p_\beta(x_t|z, x_{<t})$, where $\prod_t$ is over all $t$.
Empirically, we validate this via reconstruction experiments where we first infer $z$ from the sequence $x$ and then generate the sequence using only these inferred $z$ (without conditioning on the ground-truth context). This yields 100% reconstruction accuracy on the OpenWebText validation set, confirming that $z$ successfully captures the global information necessary for generation. This finding is further supported by our probing results across different models (see the link in our response to point 4).

**6. Empirical support for conceptual claims (short windows forcing global information into latent variables)**

As in our response to your point 5 above, $z$ controls the generation of each $x_t$ and is inferred from the whole sequence $x$. With the short context window, the generation of $x_t$ must rely on $z$ to provide information beyond the short window. Empirically, our inferred $z$ can reconstruct the whole sequence accurately, as mentioned above. Our probing results (see the link in our response to point 4) further confirm that the inferred $z$ captures the global information.

*We will improve our paper based on your helpful feedback. Given our responses above, we humbly request that you reconsider your rating. Thanks!*
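The non-amortized, per-sequence inference loop discussed in point 1 of this rebuttal (gradient ascent on the ELBO over local parameters, with no encoder network) can be illustrated with a 1-D toy whose posterior is known in closed form. This is an illustrative sketch, not the paper's code: the toy model is $z \sim \mathcal{N}(0,1)$, $x|z \sim \mathcal{N}(z,1)$, so the true posterior is $\mathcal{N}(x/2, 1/2)$, and the ELBO gradients are analytic.

```python
import numpy as np

def infer_local_params(x, steps=64, lr=0.1):
    """Classical VB for one sequence: gradient ascent on the ELBO over the
    LOCAL parameters (mu, log_var) of q(z|x) = N(mu, exp(log_var)).
    Toy model: z ~ N(0,1), x|z ~ N(z,1); no amortized encoder is used."""
    mu, log_var = 0.0, 0.0
    for _ in range(steps):
        var = np.exp(log_var)
        # Analytic ELBO for this toy model (up to a constant):
        #   ELBO = -0.5*(x - mu)**2 - 0.5*mu**2 - var + 0.5*log_var
        mu += lr * ((x - mu) - mu)        # d ELBO / d mu
        log_var += lr * (0.5 - var)       # d ELBO / d log_var
    return mu, np.exp(log_var)

mu, var = infer_local_params(x=2.0)
# the loop converges to the exact posterior N(x/2, 1/2) = N(1.0, 0.5)
```

Increasing `steps` plays the role of the inference-step dimension $T_{fast}$ here, while in the full model the global (decoder) parameters would be updated on a slower schedule outside this inner loop.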
Summary: This work presents the novel Latent-Thought Language Model (LTM) class of models, where an explicit latent vector is introduced to guide the generation of tokens. The model is optimized within the variational Bayes framework, using a faster learning rate for the latent-vector distribution parameters and a slower rate for the decoder parameters. Empirical evaluation against autoregressive and diffusion models highlighted several results: increased parameter efficiency, improved validation perplexity, and few-shot in-context reasoning capability.

## update after rebuttal
After reading the other reviews and author responses, I decided to keep my initial grade.

Claims And Evidence: The claims of increased parameter efficiency, improved validation perplexity, and few-shot in-context reasoning capabilities are in general well supported by empirical evidence.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem. Well-known benchmark sets are used: OpenWebText for pretraining, seven open datasets for zero-shot unconditional tasks, and GSM8K for few-shot evaluation of arithmetic reasoning.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experiment design and analysis appear sound. An ablation is performed. Appropriate metrics such as perplexity and MAUVE scores are used.

Supplementary Material: No

Relation To Broader Scientific Literature: The main idea is inspired by the declarative/procedural model (Ullman, M. T. Contributions of memory circuits to language: The declarative/procedural model. Cognition, 92(1-2): 231-270, 2004) and the language of thought (Fodor, J. A. The Language of Thought. Harvard University Press, 1975). Most technical details relate to the variational Bayes literature.

Essential References Not Discussed: Other flavors of large language models, such as Mixture-of-Experts models and state-space models, were not considered or discussed.
Jiang, Albert Q., Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, et al. "Mixtral of Experts." Gu, Albert, and Tri Dao. "Mamba: Linear-Time Sequence Modeling with Selective State Spaces." Other Strengths And Weaknesses: Strong points: * The idea seems quite novel * Strong performance against GPT2, SEDD, MDLM and MD4. Weak points * Scalability might be difficult * While overall clear, the paper is "dense" at some moments, and certain phrases require more explanation, for example what is meant by "lifted latent space" Other Comments Or Suggestions: Typos: Page 2, right column, line 109: "is not as good as have separate sets" Questions For Authors: 1. How expensive is the inference, and how does the inference latency compare to other LLM approaches? 2. Were all compared models using the same GPT2 tokenizer? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: *Thank you for your insightful comments and for recognizing the novelty and strong performance of our work. We address your concerns point-by-point as follows.* **1. Scalability.** Our Latent Thought Models (LTMs) scale along two primary dimensions: model size and the number of inference steps ($T_{fast}$). This parallels scaling model size and scaling the length of CoTs in Auto-Regressive (AR) models. Crucially, as discussed in Sec. 2.5, the FLOPs/token cost from $T_{fast}$ scales linearly, similar to CoT length cost. Therefore, LTMs pose no fundamental asymptotic barriers to scaling beyond those of current LLMs. In practice, we do acknowledge the engineering challenge of building robust infrastructure for large-scale gradient-based Variational Inference (for pre-training and deployment) and designate this as future work. **2. Inference expense and latency compared to other LLMs** Our analysis in Sec. 2.5 indicates that the inference expense (FLOPs/token) scales linearly with the number of inference steps, $T_{fast}$. To offer a concrete comparison: performing 16 inference steps with our LTM-M model has latency comparable to generating ~8.7 CoT tokens in the iso-trainFLOPs/tok AR baseline, GPT2-L. It is also necessary to highlight that practical inference expenses vary significantly depending on the task. While likelihood estimation (Table 1) processes the full sequence length, conditional generation (Fig 6 & Table 2) benefits from typically shorter prefill contexts. Furthermore, for unconditional generation (Table 3), there is no gradient backpropagation in LTMs at inference time, which may lead to considerable speedups compared to other LLM baselines whose decoder networks are significantly larger. **3. Were all compared models using the same GPT2 tokenizer?** Yes (line 612 in the Appendix). **4. Explanation of "lifted latent space"** Thank you for the feedback.
"Lifted" is synonymous with "latent" in our context, and is meant to emphasize abstraction from "ground" tokens. We shall make this clear in revision. **5. Typo on page 2: "is not as good as have separate sets"** Thanks. We will fix it in revision. **6. Discussion of other language models like Mixture-of-Experts (MoE), State Space Models (Mamba), etc.** We will cite these LLM architectures in revision and discuss building our LTMs on these architectures in future work. Our cross-attention layers that incorporate $z$ can be naturally inserted into these architectures. Since at each cross-attention layer $z_l$ is compact (consisting of only a small number of vectors), the cross-attention is very light, and this is actually quite consistent with the attention-less state vector in Mamba. We shall explore this direction in future work. *We will revise our paper based on your deeply insightful comments. Thank you for your positive evaluation!*
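Point 6 above can be made concrete with a small numerical sketch of cross-attention over a compact set of latent vectors $z_l$: because there are only a few latents, the attention map is tiny and the layer adds little cost. This is an illustrative reconstruction under our own assumptions (single head, random projections, invented shapes), not the authors' implementation:

```python
import numpy as np

def light_cross_attention(H, Z, rng):
    """Single-head cross-attention where s token states attend to m latent
    vectors (m << s), so cost is only O(s * m * d). Illustrative sketch;
    shapes and the single-head form are our assumptions."""
    s, d = H.shape
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q = H @ Wq                       # (s, d) queries from token states
    k, v = Z @ Wk, Z @ Wv            # (m, d) keys/values from compact latents
    scores = q @ k.T / np.sqrt(d)    # (s, m) -- small because m is small
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v                  # (s, d) latent-informed update

rng = np.random.default_rng(0)
H = rng.standard_normal((128, 64))   # 128 token states, width 64
Z = rng.standard_normal((4, 64))     # only 4 latent thought vectors
out = light_cross_attention(H, Z, rng)
print(out.shape)  # (128, 64)
```

With m = 4 latents, the score matrix is only 128 x 4, which is the sense in which the extra layer is "very light" compared to full self-attention over the sequence.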
Diffusion on Language Model Encodings for Protein Sequence Generation
Accept (poster)
Summary: The paper introduces DiMA, a latent diffusion framework that works on protein language model representations. While protein sequence design has advanced with discrete and autoregressive methods, the potential of continuous diffusion has been under-explored. DiMA is developed through a systematic exploration of architectural choices and diffusion components, enabling it to generalize across multiple protein encoders with 8M to 3B parameters. It achieves high performance across different types of protein representations, such as sequence-only, dual-decodable, and multimodal ones, using the same architecture and training approach. DiMA is extensively evaluated against existing methods using multiple metrics across two protein modalities. It generates novel, high-quality, and diverse protein sequences, outperforming baselines in various tasks such as unconditional generation, family-specific design, motif scaffolding, and fold-conditioned generation. With only 35M parameters, it shows strong performance on multiple benchmarks. DiMA also demonstrates versatile functionality in conditional generation tasks. This work offers a universal continuous diffusion framework for protein sequence generation, providing both architectural insights and practical applications across diverse protein design scenarios. Claims And Evidence: All claims are well supported by convincing evidence and experiments. Methods And Evaluation Criteria: Although the method is built directly on existing latent diffusion methods and protein language models, which may limit its novelty, I find it conceptually straightforward and useful in practice, and even generalizable to sequence-structure co-design. I especially like the authors' comprehensive ablation study and their exploration of training and inference strategies for diffusion models, instead of directly copying the parameterization from the image domain, as was commonly done in previous works.
Theoretical Claims: N/A Experimental Designs Or Analyses: The evaluation of the method is thorough and convincing, covering different aspects of the generated samples and extending to more interesting conditional tasks. I believe the experiments are convincing enough to demonstrate the advantages of the proposed method. Supplementary Material: I only quickly went through the supplementary material. Relation To Broader Scientific Literature: See Summary. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: 1. The formatting of the paper requires improvement. For instance, the titles of the paper and its subsections should start with uppercase letters. An example would be "Diffusion on Language Model Encodings for Protein Sequence Generation." 2. Line 51: "We demonstrate that continuous diffusion on protein embeddings enables effective sequence redand structure generation across multiple tasks and encoder architectures." typo: "redand" -> "and" Questions For Authors: 1. Line 98: I'm wondering whether the normalization is applied over the length dimension s or the hidden dimension d. From the last sentence, it seems that the normalization is over the length dimension, which seems weird for generation. 2. Do the authors also use the ESM-2 decoder for diffusion on CHEAP and SaProt representations? 3. What's the rationale behind the heuristic approach that the reconstruction loss should exhibit an approximately linear increase over diffusion time? 4. Typically, the quality of generated samples from diffusion models can be controlled by adjusting the temperature. I'm wondering how the authors set the temperature values in different experiments and for different baselines. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and positive assessment of our work. We appreciate your recognition of our systematic ablation studies and approach to developing protein-specific diffusion parameterizations rather than simply adopting techniques from the image domain. Below, we address each point raised. > **C1, C2.** Formatting and typos. Thank you for pointing out the formatting issues and typos. We will fix the formatting inconsistencies, including proper capitalization of titles and section headings. > **Q1.** Line 98: I'm wondering whether the normalization is applied over the length dimension $s$ or the hidden dimension $d$. From the last sentence, it seems that the normalization is over the length dimension, which seems weird for generation. We apologize for the unclear description. The normalization is applied over the hidden dimension $d$ rather than the sequence length dimension $s$. Specifically, for each component $d_i$, we precompute the mean and variance over the training data and then apply normalization using these statistics to achieve zero mean and unit variance. We will clarify this in the revised manuscript. > **Q2.** Do the authors also use the ESM-2 decoder for diffusion on CHEAP and SaProt representations? We use the same approach across all encoder architectures. For CHEAP and SaProt representations, we fine-tune their corresponding pretrained decoders alongside the diffusion model, similar to our approach with ESM-2. We found that low-dimensional embeddings (e.g., ESM-2 8M, $d$=320) are less robust to small perturbations than higher-dimensional ones (ESM2/SaProt 650M, $d$=1280; CHEAP, $d$=1024). Fine-tuning the decoders helps minimize these effects during diffusion generation. > **Q3.** What's the rationale behind the heuristic approach that the reconstruction loss should exhibit an approximately linear increase over diffusion time?
The key issue with standard noise schedules (linear, cosine) is that they corrupt data very gradually at small timesteps. This means the model spends significant training resources on nearly trivial denoising tasks where input and target are almost identical. Our approach ensures the difficulty of the denoising task increases steadily with each timestep. By designing a noise schedule where reconstruction loss grows linearly with diffusion time, we create learning problems of incrementally increasing difficulty, allowing the model to make consistent progress throughout training rather than facing an abrupt jump in task complexity. > **Q4.** Typically, the quality of generated samples from diffusion models can be controlled by adjusting the temperature. I'm wondering how the authors set the temperature values in different experiments and for different baselines. For baseline comparisons, we use the sampling parameters recommended by the original authors of each method. For autoregressive models like ProGen2 and ProLLAMA, which show suboptimal quality and collapse to highly repetitive sequences on default settings, we performed grid searches to identify optimal temperature and top-p values. DiMA has two analogous parameters for navigating the quality-diversity trade-off: the number of generation steps and the self-conditioning rate. The dependencies of quality, diversity, and novelty on these parameters are shown in figures 2 and 3 at https://tinyurl.com/icml25re. --- We look forward to incorporating your feedback in our final version and thank you again for your time and insightful comments. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which addresses most of my concerns. For Q3, I'm a little concerned about the claim that denoising at small noise levels is nearly trivial - the model still needs to learn fine-grained denoising capability.
--- Reply to Comment 1.1.1: Comment: When working with protein encodings, reconstruction at small noise levels turns out to be quite robust. For example, testing DiMA with CHEAP representations (which enables dual-decoding into both sequence and structure) we observe that at $t$=0.05, sequence reconstruction accuracy remains 100%, and structural RMSD stays below 0.2Å (figures 5 and 6 at https://tinyurl.com/icml25re). Our schedule leverages this robustness by allocating more training to challenging noise levels rather than the nearly lossless stages with $t$→0.
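The schedule behavior discussed in this thread can be illustrated numerically. Below is a minimal sketch, assuming $\alpha_t$ denotes the retained-signal coefficient, comparing a standard cosine-style schedule with the $\alpha_t = 1/(1 + d^2\tan^2(\pi t / 2))$ schedule (with $d$ = 10) that the authors describe; the exact DiMA parameterization may differ in details:

```python
import math

def cosine_alpha(t):
    # Standard cosine-style signal coefficient: decays slowly near t = 0.
    return math.cos(math.pi * t / 2) ** 2

def tan_alpha(t, d=10.0):
    # Schedule quoted in the authors' rebuttal:
    # alpha_t = 1 / (1 + d^2 * tan^2(pi * t / 2)), with d = 10.
    return 1.0 / (1.0 + d ** 2 * math.tan(math.pi * t / 2) ** 2)

for t in (0.05, 0.25, 0.5, 0.75):
    print(f"t={t:.2f}  cosine={cosine_alpha(t):.3f}  tan-10={tan_alpha(t):.3f}")
```

At small $t$ the cosine schedule retains almost all signal (a nearly trivial denoising task), while the tan-style schedule already removes a substantial fraction, matching the motivation given above for allocating training to harder noise levels.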
Summary: This paper introduces DiMA, a continuous latent diffusion model that creates (novel) protein sequences using protein language model (PLM) hidden representations. Unlike other approaches that use discrete diffusion or step-by-step generation, DiMA explores continuous diffusion to make better sequences. It works well with different protein encoders (ESM-2, CHEAP, and SaProt, to name a few) and performs strongly in tasks such as adding motifs, creating fold-specific sequences, and designing protein families. Experimental (in silico) results show that DiMA can make diverse, new, and structurally sound protein sequences that often beat existing methods. Claims And Evidence: The paper explores the proposed paradigm (DiMA) to generate protein sequences comparable to larger models, where the major claim is that DiMA demonstrates strong performance on multiple sequence-design benchmarks; I find, however, that this is not well supported by the experiments (in Table 2, it does not yet outperform other baseline models). Methods And Evaluation Criteria: The proposed method, as an application of well-established latent diffusion models, is straightforward and makes sense. In terms of methodology, no novel theory or method is proposed. The evaluation criteria basically make sense given the authors' justification, though they do not follow previous practice. Theoretical Claims: No theoretical claim is found in this application-oriented paper. Experimental Designs Or Analyses: The experiments are comprehensive by design, covering several useful ablation studies across different tasks for sequence generation. The extended results in the appendix also demonstrate the solidness of the experiments. Personally speaking, the use of SwissProt and AFDBv4-90 is not very typical for sequence-based protein language models. Supplementary Material: I carefully checked the model details and the definition of metrics.
I also roughly went through the extended experimental results (figures and tables) in the appendix. Relation To Broader Scientific Literature: This work is situated at the intersection of (latent) diffusion models, protein language models, and protein design. It involves engineering foundational protein language models (related: ESM, DPLM, etc.) and uses denoising diffusion models over the latent space. Essential References Not Discussed: No, but I recommend the authors explicitly set up a "related work" section in the main text rather than in the appendix, which is important for general readers to grasp the proper context of this paper. Other Strengths And Weaknesses: Strengths: - Clear problem and methodology - Comprehensive testing/benchmarking experiments and comparison Weaknesses: - Limited theory behind the proposed method, making it a straightforward application of a well-established method - No comprehensive discussion of potential limitations or failure cases - Performance not clearly stronger than existing methods, which may weaken the contribution of this work Other Comments Or Suggestions: - Lines 187-202: the long list of bullet points is not reader-friendly. Please consider re-organizing it. - A discussion on how to scale up DiMA (maybe by combining it with structural models like AlphaFold) would be valuable. Questions For Authors: - How does DiMA compare in speed (training/inference) to discrete diffusion models such as ESM3 and DPLM? - Since the sequence representation can also share some merits with structure prediction models (such as AlphaFold2), such that people repurpose PLMs for folding (e.g., ESMFold), can you train or fine-tune DiMA with structure-based tasks and data, i.e., explicitly incorporating structure information? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review and constructive comments. We appreciate your recognition of our work's comprehensive experimental design and clear methodology. > **W1.** Limited theory behind the proposed method, making it a straightforward application of a well-established method. While latent diffusion models have been established in other domains, our work demonstrates that naive application of continuous diffusion to protein sequences yields poor results. Our ablation studies in Table 1 show that standard diffusion implementations struggle with protein generation, achieving only ~60 pLDDT compared to DiMA's 83.3 pLDDT. We demonstrate that domain-specific engineering is essential for effective latent diffusion models. The challenge of applying diffusion to proteins is further evidenced by existing approaches requiring custom modifications. For example, the discrete model DPLM [1] requires specialized initialization and sampling strategies [1, Appendix D.1], MultiFlow requires distillation steps, and so on. Our work contributes by establishing a principled framework for latent diffusion that generalizes across different representation spaces (from 8M to 3B parameter sequence-based and multimodal encoders) and tasks with a single architecture. As shown in our fold-conditioning experiments, this enables DiMA to achieve stronger performance than specialized structure-generation models like RFDiffusion (TM-score 0.93 vs 0.48). > **W2.** No comprehensive discussion of potential limitations or failure cases. Thank you for this suggestion. In our revision, we will add a dedicated discussion section addressing current challenges and future opportunities. Structure generation in our approach relies on heavyweight folding models like ESMFold. We see significant potential for enhancing performance through joint training of the diffusion model with lightweight structure prediction components.
This approach could both improve computational efficiency and the structural awareness of the model. > **W3.** Performance not clearly stronger than existing methods, which may weaken the contribution of this work. As shown in Figure 2, DiMA, MultiFlow, RFDiffusion, and DPLM each demonstrate Pareto-optimal performance on the quality-diversity tradeoff, with each dominating in different aspects. Considering model sizes, DiMA shows remarkable performance, using two orders of magnitude fewer parameters than, e.g., DPLM. Table 8 demonstrates that only DiMA and MultiFlow achieve balanced metrics on this tradeoff, with the notable advantage that DiMA trains exclusively on sequences while MultiFlow requires both sequence and structure data, making DiMA more scalable due to its independence from limited 3D structural data. > **Q1.** How does DiMA compare in speed (training/inference) to discrete diffusion models such as ESM3 and DPLM? We analyzed inference speed by generating 100 sequences of length 250 with 5 repetitions (for details see Figure 4 at https://tinyurl.com/icml25re). DiMA-35M is ~10x faster than DPLM-150M, which is slowed by using a larger model at each generation step. Structure models are notoriously computationally demanding. We have not measured training speed, but we expect DiMA to be faster as it requires only forward encoder passes, whereas discrete models need backward encoder passes as well. > **Q2.** Can you train or fine-tune DiMA with structure-based tasks and data? Yes, DiMA incorporates structural information in two ways: 1. We use local structural tokens for each amino acid, passed with the sequence into the SaProt encoder. This allows explicit use of 3D structure in the scaffolding task, achieving strong results (details in section 3.6.1, appendix E.8, and Figure 3). 2. We employ DiMA with the CHEAP encoder based on ESMFold, enabling dual decoding (sequence and structure) from sequence alone.
This approach outperforms RFDiffusion in fold-specific generation using explicit structure guidance (details in sections 3.6.3 and appendix E.9, and Figure 4). > **C1.** Lines 187-202: the long list of bullet points is not reader-friendly. Please consider re-organizing it. Thank you for this feedback. We will reorganize this section in the revision to improve readability. > **C2.** A discussion on how to scale up DiMA (maybe by combining it with structural models like AlphaFold) would be valuable. Thank you for this suggestion. We are currently exploring the integration of structural generation models with our latent diffusion approach for co-generation. In the revision, we will add a discussion section highlighting future directions and the potential benefits of combining DiMA with structural models. --- We thank you for your thoughtful feedback and suggestions. We would be grateful if you would consider raising your score, in case we have addressed your concerns. Please let us know if any aspects still need clarification. [1] https://arxiv.org/abs/2402.18567 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response, which has basically addressed my concerns. I look forward to seeing DiMA as a better-designed "principled framework", as you put it: a framework that can be readily and easily applied to any PLM, which would be very valuable for the whole community. I hope the authors can benefit from my review comments in improving the paper's quality. I have decided to raise my score to 3.
Summary: The paper introduces DiMA, a latent diffusion approach for protein sequence generation leveraging pre-trained embeddings. The authors consider sequence-only, structural, and sequence-structure joint embeddings. DiMA produces novel and high pLDDT samples. Conditional generation tasks based on protein family, motif scaffolding, and infilling are shown. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: The paper is contextualized well within the literature, both with respect to different pre-trained embeddings and generative sequence models. Essential References Not Discussed: Concurrent work that the authors will be interested in: Lu, A. X., Yan, W., Robinson, S. A., Yang, K. K., Gligorijevic, V., Cho, K., ... & Frey, N. (2024). Generating All-Atom Protein Structure from Sequence-Only Training Data. bioRxiv, 2024-12. Other Strengths And Weaknesses: Latent diffusion over pre-trained embeddings is an interesting and timely idea. The DiMA evaluation is thorough, as are the benchmarks and baselines. Can the authors comment on the 4JHW and 5YUI PDB IDs as failure modes? What causes DiMA and other methods to struggle with these particular cases? It would be more informative to include a graphic overview of the model and approach as Fig 1, rather than the noise schedule. Distance metrics like FD-seq and other metrics should be introduced and defined in the main text. Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and the valuable suggestions. We appreciate your recognition of DiMA's thorough evaluation and benchmarking approach. Below, we address each point raised. > Concurrent work that the authors will be interested in. Regarding the work by Lu et al. (2024), we became aware of PLAID and evaluated it after our submission, just before receiving the reviews. We evaluated PLAID-100M (https://github.com/amyxlu/plaid) using the same protocol we apply to all models in our work. Our analysis shows that DiMA substantially outperforms PLAID, especially in terms of protein quality: | Model | pLDDT ↑ | ProGen-ppl ↓ | ESM2-pppl ↓ | scPerplexity ↓ | Rep | CD₀.₅ | CD₀.₉₅ | | --- | --- | --- | --- | --- | --- | --- | --- | | PLAID-100M | 53.48 | 14.982 | 13.46 | 2.294 | 0.0007 | 1.0 | 1.0 | | DiMA-35M | 83.4 | 9.00 | 5.80 | 1.78 | 0.010 | 0.969 | 1.000 | We will include these results in the final manuscript; specifically, we will update Section 3.5 ('Comparison with Large Pretrained Models') and Figure 2. > Can the authors comment on the 4JHW and 5YUI PDB IDs as failure modes? What causes DiMA and other methods to struggle with these particular cases? These cases highlight limitations in the current benchmark rather than specific model shortcomings. Our analysis reveals that even when using reference sequences, the resulting structures after ESMFold prediction fail to meet the benchmark success criteria. For 4JHW, the reference sequence yields motif RMSD exceeding 6.0Å with pLDDT around 30, far below the thresholds for success (RMSD ≤ 1.0Å, pLDDT ≥ 70). Similarly, 5YUI produces motif RMSD above 3.0Å. The detailed results of our analysis on these challenging cases are presented in Table 1 at [https://tinyurl.com/icml25re]. Recent work [1] has recognized these benchmark limitations and is developing improved evaluation protocols, specifically addressing them in Table 4 of their preprint.
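The benchmark success rule cited above (motif RMSD ≤ 1.0Å and pLDDT ≥ 70) can be written out directly. The function name and defaults below are our own; only the thresholds and the 4JHW reference-sequence numbers come from the text:

```python
def motif_scaffold_success(motif_rmsd, plddt, rmsd_max=1.0, plddt_min=70.0):
    """Benchmark success rule quoted in the rebuttal: a scaffold counts as a
    success only if motif RMSD <= 1.0 A and pLDDT >= 70 (thresholds from the
    text; function name and defaults are illustrative)."""
    return motif_rmsd <= rmsd_max and plddt >= plddt_min

# Reference-sequence numbers reported for 4JHW: RMSD > 6.0 A, pLDDT ~ 30.
print(motif_scaffold_success(6.0, 30.0))  # -> False: even the reference fails
```

This makes the rebuttal's point mechanical: if the reference sequence itself cannot pass the criterion, the benchmark case measures the evaluation pipeline rather than the generative model.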
> It would be more informative to include a graphic overview of the model and approach as Fig 1, rather than the noise schedule. We appreciate your suggestion about including a graphic overview of our model approach. In the final version, we are committed to adding a figure to illustrate DiMA's architecture and workflow. > Distance metrics like FD-seq and other metrics should be introduced and defined in the main text. We will also ensure that distance metrics like FD-seq and other evaluation metrics are properly introduced and defined in the main text to improve clarity. --- Thank you again for your constructive feedback, which will help us improve the final manuscript. [1] https://arxiv.org/pdf/2502.12479 --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. They have addressed the points raised in my review and I will raise my score.
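For readers wanting a concrete definition now, a common formulation of such Fréchet-style distances (assumed here by analogy with FID; whether FD-seq uses exactly this form is our assumption) fits a Gaussian to each set of sequence embeddings and computes $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$:

```python
import numpy as np

def frechet_distance(X, Y):
    """Frechet distance between Gaussians fitted to two embedding sets
    (FID-style formula; assumed here as the likely basis of FD-seq)."""
    mu1, mu2 = X.mean(axis=0), Y.mean(axis=0)
    s1 = np.cov(X, rowvar=False)
    s2 = np.cov(Y, rowvar=False)
    # Tr((S1 S2)^{1/2}) computed via the symmetric form S1^{1/2} S2 S1^{1/2}.
    w, v = np.linalg.eigh(s1)
    s1_half = (v * np.sqrt(np.clip(w, 0, None))) @ v.T
    inner = s1_half @ s2 @ s1_half
    tr_sqrt = np.sqrt(np.clip(np.linalg.eigvalsh(inner), 0, None)).sum()
    diff = mu1 - mu2
    return diff @ diff + np.trace(s1) + np.trace(s2) - 2 * tr_sqrt

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 8))        # e.g. embeddings of natural proteins
B = rng.standard_normal((500, 8)) + 1.0  # shifted distribution: larger FD
print("FD(A, A) =", round(frechet_distance(A, A), 6))
print("FD(A, B) =", round(frechet_distance(A, B), 3))
```

Identical sets give a distance of (numerically) zero, while a mean-shifted set gives a clearly positive value; in practice X and Y would be embedding matrices of generated and reference sequences.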
Summary: The authors have proposed a continuous diffusion framework, named DiMA. DiMA consists of three modules: 1) frozen pLMs like ESM2 to extract a latent embedding for a given protein sequence, 2) a continuous diffusion module to generate latent embeddings from noise, and 3) a decoder that maps the latent embeddings to amino acid sequences. It can be seen as a kind of knowledge distillation that injects information from pre-trained large pLMs, like ESM or SaProt, into DiMA. Through comprehensive and systematic experiments, the authors claim that the performance of DiMA is consistent with (or better than) specialized baselines in different scenarios, such as unconditional generation and conditional generation (e.g., motif scaffolding and fold-conditioned generation). Claims And Evidence: The claims are supported by clear evidence. The authors claimed the performance of DiMA matches or exceeds the baselines. Table 2 compares DiMA with other generative baselines in the unconditional scenario, Figure 3 in the motif scaffolding-conditioned scenario, Table 10 in the family-specific conditioned scenario, and Table 14 in the fold-conditioned scenario. Besides, the authors have conducted a detailed ablation study to identify key components in Table 1. Methods And Evaluation Criteria: The proposed continuous diffusion model for generating amino acid sequences is logical. The evaluation criteria, such as pLDDT and TM-score for structural quality and perplexity for sequence quality, are appropriate. These criteria comprehensively address quality, diversity, and novelty in generation tasks. Theoretical Claims: The model illustration in Section 2 and Appendix C.1 includes details on Noise Schedule Optimization and Self-Conditioning. Experimental Designs Or Analyses: The overall experimental designs are complete and comprehensive. For benchmarking, DiMA is compared with 5 groups of baselines. An ablation study is conducted to identify the key contributing modules in DiMA.
To illustrate the performance of DiMA, the authors have conducted extensive experiments on both unconditional generation and conditional generation, including motif-scaffolding, family-specific, and fold-conditioned settings. Supplementary Material: The supplementary material is very comprehensive, including details of the model architecture, explanations of each evaluation metric, and additional results for the ablation study and benchmarks on conditioned scenarios. Relation To Broader Scientific Literature: Essential prior works like EvoDiff (Alamdari et al., 2023), DPLM (Wang et al., 2024), and MultiFlow (Campbell et al., 2024) are appropriately cited and compared with the proposed method as baselines. Essential References Not Discussed: The discussion of related works is comprehensive. Other Strengths And Weaknesses: Strengths: 1. The experiment designs are well done, demonstrating the ability of DiMA in different generation scenarios. 2. The manuscript is well written and easy to read. Weaknesses: 1. The paper is not so technically novel. Diffusion is widely applied to protein sequences in the community, as are pretrained pLMs. Techniques like noise schedule optimization and long-skip connections are mainly engineering improvements. Other Comments Or Suggestions: 1. The authors should explain the meaning of "sd-10" in Figure 1. 2. On line 208 of page 4, the authors mention that "padding omitting" decreases the performance, but its detailed influence is not shown in Table 1. 3. On line 236 of page 5, the authors assert that DPLM generates longer sequences compared to DiMA. What length determination strategy (Appendix C.3) is employed for DiMA inference here? Is the same strategy applied to DPLM to ensure a fair comparison? Questions For Authors: 1. The authors introduced DiMA, a continuous diffusion method for protein sequence generation.
However, they did not adequately demonstrate the necessity of using continuous diffusion compared to the discrete diffusion approach employed in DPLM and MultiFlow, which appears more intuitive for sequence generation. 2. Does DiMA have a sequence length preference? Say we separate protein sequences by their length into different bins, like [0-50], [50-100], ...; what is the performance of DiMA in each group, in terms of quality, diversity, and novelty? 3. As shown in Table 3, DiMA achieves consistent quality (i.e., pLDDT) compared with other baselines while sacrificing diversity and novelty. Is there any approach to elevating diversity and novelty without reducing the size of the ESM encoder (so that the generation quality may stay at the same level)? 4. Do you have a plan to release the code? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful and detailed feedback. We appreciate the time you have taken to review our work and your positive comments about our paper. We aim to address your concerns and questions below. > **W1.** On technical novelty. While diffusion models are not new, our work shows that naive application of continuous diffusion to proteins yields poor results (~60 vs 83.3 pLDDT, Table 1). Our systematic ablations quantify the impact of each component. As reviewer *Y6i7* notes, we contribute by 'exploring training and inference strategies instead of directly copying parameterization from image domain'. This enables DiMA to achieve strong performance across diverse representation spaces (from 8M to 3B parameter sequence-based and multimodal encoders) and multiple tasks with a single architecture, outperforming specialized models like RFDiffusion in fold conditioning (TM-score 0.93 vs 0.48). > **C1.** On the meaning of "sd-10". Thank you for noting the unclear meaning of "sd-10" in Figure 1. This refers to our noise schedule based on the formula $\alpha_t = \frac{1}{1 + d^2\tan^2(\frac{\pi t}{2})}$, where $d$=10 determines the rate of information decay. We will rename this to "tan-10" in the revised manuscript for better clarity. > **C2.** On padding omitting. The "w/o length sampling" entry in Table 1 corresponds to the model trained with padding and without length sampling. We will clarify this connection in the revision. > **C3.** On sequence length determination. For a fair comparison across all methods, we apply the same length sampling strategy during inference: sequence lengths are sampled from the empirical distribution of the training set. The difference noted on line 236 refers to protein domains (distinct structural/functional units within proteins), not overall sequence length.
While we control overall sequence length distribution across methods, domain length distributions emerge naturally from each model's generation process. Our analysis shows DiMA generates domain length distributions closely matching natural proteins, whereas DPLM skews toward longer domains (Figure 18C). > **Q1.** On the necessity of continuous vs. discrete diffusion. Thank you for this important question. While discrete diffusion may seem more intuitive for sequences, continuous representations have proven highly effective in protein domains including representation learning (ESM, ProtT5), structure prediction (AlphaFold, ESMFold), and backbone generation. Recent work (CHEAP [1]) confirms continuous representations capture richer protein features. Continuous diffusion offers several advantages:
1. Direct application of established score-based techniques like classifier and classifier-free guidance without requiring discrete approximations.
2. Seamless integration with multimodal representations (CHEAP, SaProt) that jointly capture sequence and structure.
3. More stable and efficient training compared to discrete spaces.
4. Fine-grained optimization of diffusion parameters, as shown in our ablation studies.

Our experiments demonstrate that this approach not only achieves strong performance but also enables structure-aware generation and fold-conditioning that are challenging in purely discrete frameworks. We believe both approaches have merits, and our work contributes to understanding how continuous diffusion can be effectively applied to protein design. We will expand this discussion in our final manuscript. > **Q2.** On sequence length preference. Thank you for this question. We conducted additional experiments and our results show that DiMA maintains consistent performance across all sequence length ranges, with quality metrics closely tracking the natural protein distribution (figure 1 at https://tinyurl.com/icml25re).
DiMA achieves more stable diversity than DPLM-3B, which shows a drop in diversity for longer sequences. >**Q3.** On control over diversity and novelty. Thank you for raising this point on balancing quality, diversity, and novelty. Our framework offers two knobs to control this trade-off without changing the encoder architecture (figures 2, 3 at https://tinyurl.com/icml25re): 1. The number of sampling steps. Reducing the number of steps increases diversity and novelty and maintains reasonable quality. 2. The self-conditioning rate parameter (w), which controls how much the model relies on its previous predictions during sampling. Lower values of w (0.6-0.8) yield higher diversity and novelty with a modest quality trade-off. >**Q4.** On the code release. We are currently preparing our codebase for public release and plan to make it available upon publication. --- We are grateful for your detailed feedback and will incorporate your suggestions in the final version of our manuscript. Please let us know if any questions still need clarification. [1] https://www.biorxiv.org/content/10.1101/2024.08.06.606920v2 --- Rebuttal Comment 1.1: Comment: Thank the authors for their response, which has addressed my concern. I will raise the score to 4.
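As a small technical aside on the "tan-10" schedule named in C1 above: the closed-form $\alpha_t$ can be sketched in a few lines (an illustrative sketch only; the function name and the monotonicity check are ours, while the formula and d=10 come from the rebuttal):

```python
import math

def alpha_t(t: float, d: float = 10.0) -> float:
    """Signal level alpha_t = 1 / (1 + d^2 * tan^2(pi * t / 2)) for t in [0, 1)."""
    return 1.0 / (1.0 + (d * math.tan(math.pi * t / 2.0)) ** 2)

# d controls how quickly information decays: alpha_t falls from 1 toward 0
# as t goes from 0 to 1; larger d makes the decay steeper early on.
print([round(alpha_t(t), 4) for t in (0.0, 0.25, 0.5, 0.75)])
```

At t=0.5, tan(pi/4)=1, so alpha_t reduces to 1/(1+d²)=1/101 for d=10, which matches the "rate of information decay" role d plays in the rebuttal.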
Ex-VAD: Explainable Fine-grained Video Anomaly Detection Based on Visual-Language Models
Accept (poster)
Summary: This paper proposes an explainable approach named Ex-VAD for fine-grained video anomaly detection, which consists of three modules: Anomaly Explanation Generation Module (AEGM), Multimodal Anomaly Detection Module (MADM), and Label Augment and Alignment Module (LAAM). AEGM tries to extract and refine frame-level captions using VLMs, and then generates video-level anomaly explanations with an LLM. MADM encodes the text and extracts temporal and spatial features to detect coarse-grained anomalies. LAAM expands and aligns anomaly category labels with multimodal features to achieve fine-grained anomaly detection. Experimental results verify the effectiveness of the proposed method. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims included in the paper Experimental Designs Or Analyses: The designs of Anomaly Explanation Generation Module (AEGM), Multimodal Anomaly Detection Module (MADM), and Label Augment and Alignment Module (LAAM) are reasonable. Supplementary Material: No Supplementary Material provided Relation To Broader Scientific Literature: results Essential References Not Discussed: [1] Holmes-vad: Towards unbiased and explainable video anomaly detection via multi-modal llm. ArXiv2024. Other Strengths And Weaknesses: Strengths: 1. An intuitive and well-motivated method is proposed for fine-grained video anomaly detection that can produce video-level anomaly explanations. 2. This paper is well written and easy to follow. 3. Experimental results verify the effectiveness of the proposed method. Weaknesses: 1. This paper incorporates multiple existing techniques, such as VLM and LLM, to generate explainable text and uses them to improve fine-grained video anomaly detection performance. However, this is an intuitive method, and more model designs or insightful ideas should be included.
Additionally, the long pipeline will affect the inference speed. Authors should discuss the inference speed. 2. Multiple large and powerful models such as BLIP-2 are used, which makes the comparison with existing methods unfair. Additionally, why not use the intermediate features of BLIP-2 to improve the performance? For example, we can concatenate the intermediate features of BLIP-2 and visual features of videos or treat the intermediate features of BLIP-2 as visual features. 3. Explainable texts are not shown in the main text. Some cases should be shown to indicate the behaviors of the proposed method. 4. Existing explainable video anomaly detection methods such as [1] are not discussed and compared. [1] Holmes-vad: Towards unbiased and explainable video anomaly detection via multi-modal llm. ArXiv2024. Other Comments Or Suggestions: Please refer to the Weaknesses for more details. Questions For Authors: Please refer to the Weaknesses for more details. Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript.

**R1: Evaluation of Inference Time.** Thanks for your suggestion. We would like to point out that Table 7 in our main paper provides a comparison of relevant computational metrics, including Trainable Parameters, Inference Time, and Multiply-Add Operations (MACs). Our approach achieves similar inference speeds to previous methods while improving interpretability and fine-grained anomaly detection through VLM and LLM.

|Method|Trainable Params|Inference Time|MACs|UCF-Crime (AUC%)|
| :- | :-: | :-: | :-: | :-: |
|RTFM|24.72M|**8.28ms**|126.59G|85.66|
|DMU|**6.49M**|16.60ms|_21.00G_|86.75|
|CLIP-TSA|16.41M|18.33ms|102.63G|87.58|
|VADCLIP|35.17M|22.30ms|29.17G|_88.02_|
|ExVAD|_9.97M_|_15.37ms_|**12.04G**|**88.29**|

**R2: Reasons to choose BLIP-2.** The use of VLMs and LLMs is an essential part of the solution to the lack of anomaly classification and anomaly explanations in existing studies. To reduce the required computational cost and resources, we tried various ways to generate captions during the experiments, such as BLIP-2, LLaMA 3.2, and CogVLM. We found that the optimal choice combining both efficiency and performance is BLIP-2, which generates video captions with lightweight deployment and less time. Regarding BLIP-2's intermediate features, we tried to integrate them into the model, e.g., by concatenating them with video visual features. Experiments show that this method achieves an accuracy of 88.09% on the UCF-Crime dataset, while the anomaly explanation generated by the LLM achieves 88.29%. Therefore, we finally chose the LLM-generated explanation text, which both describes the cause of the anomaly and improves accuracy by 0.2%.
**R3: Explainable texts are not shown in the main text.** Because of the 8-page limit, necessary experiments (such as accuracy of fine-grained and coarse-grained anomaly detection, ablation experiments, parameter selection, model complexity, and inference time) take up a major part of the paper, so we have placed the explainable text in the appendix section. An example showing the logic and results of anomaly text generation is given in Figure 3 in the main text. Additionally, we will add more explainable text in the camera-ready paper.

**R4: More methods to compare.** Thanks for your suggestion. We have compared our method with existing explainable video anomaly detection methods such as LAVAD [1], VERA [2], and VADor [3], and we will compare and discuss the suggested method Holmes-VAD [4] in the camera-ready paper. Different from Holmes-VAD, which focuses on the coarse-grained anomaly detection task (determining whether a video frame is abnormal), our method achieves both coarse-grained anomaly detection and fine-grained anomaly classification (identifying the abnormal category). As Holmes-VAD requires more supervised information, additionally introduces WSVAD [5] to generate pseudo-labels, and fine-tunes an LLM, its coarse-grained anomaly detection results outperform our method. Three main reasons are as follows:
1. Holmes-VAD incorporates more supervisory information, performs single-frame temporal annotation in constructing the dataset, and generates interpretable textual descriptions based on it, whereas our method stays consistent with traditional video anomaly detection methods without any additional annotation.
2. Holmes-VAD is based on the existing WSVAD [5] method, which first performs prediction to get pseudo-labels and then selects the frames with anomaly scores higher than a threshold, focusing on keyframes rather than uniform sampling.
3.
Holmes-VAD needs to fine-tune the multimodal large language model and requires more computational resources for training. Experiments demonstrate that our method has only 9.97M training parameters and 15.37ms inference time. Our method strikes a better balance between computational resources and effectiveness.

[1] LAVAD: Harnessing large language models for training-free video anomaly detection. CVPR2024.
[2] VERA: Explainable video anomaly detection via verbalized learning of vision-language models. ArXiv2024.
[3] VADor: Video anomaly detection and explanation via large language models. ArXiv2024.
[4] Holmes-VAD: Towards unbiased and explainable video anomaly detection via multi-modal llm. ArXiv2024.
[5] UR-DMU: Dual memory units with uncertainty regulation for weakly supervised video anomaly detection. AAAI2023.

---

Rebuttal Comment 1.1: Comment: Thanks the authors for the response. After reading the rebuttal, I still have concerns about the inference time. This paper incorporates multiple existing techniques, such as VLM and LLM, but the latency reported in Table 7 is only 15.37 ms and the MACs is 12.04G, which is unconvincing. Since large VLMs and LLMs are really time-consuming, can the authors provide detailed inference costs of each component? In addition, in Table 4, incorporating captions into visual features can't improve the performance (86.76 vs. 86.33); can the authors give some deeper explanations of this phenomenon?

---

Reply to Comment 1.1.1: Comment: **R1: Inference Time.** Thanks for your suggestions. We need to clarify how the results in Table 7 are tested. We first pre-extract the captions with the VLM and then generate explainable text with the LLM. During the experimental process, training and inference are performed by loading the pre-processed data.
Existing methods also measure inference time by loading pre-processed data, so the inference time in Table 7 refers to the time the model takes to perform anomaly detection, excluding the time for extracting captions and generating the explainable text. For a complete picture of the model's inference time, the following table shows the detailed inference cost of each component.

|Component|Caption|Explainable Text|Anomaly Detection|Total|
| :- | :-: | :-: | :-: | :-: |
|Inference Time|0.0003ms|3.84ms|15.37ms|19.21ms|

To process one video frame, the inference costs of extracting the caption, generating the explainable text, and performing anomaly detection are 0.0003ms, 3.84ms, and 15.37ms, respectively. The total inference cost of our model is 19.21ms. **In other words, the inference speed of our model is 52.06 FPS, which meets real-time inference requirements in practical applications.** To reduce the inference cost caused by the VLM and LLM, we adopt the following strategies: 1) we extract one caption from multiple frames due to the high similarity between neighboring frames; 2) we use video-level explainable text (i.e., one video generates one explainable text); 3) the inference time is obtained by averaging the time to process each frame.

**R2: Incorporating captions into visual features can't improve the performance.** During our experiments, we found that accuracy decreased after extracting a caption for each frame and simply fusing it with the visual features. The reason is that **unprocessed frame-level captions contain a large amount of noise and incomplete content, lack descriptions of global context and scene dynamics, and cannot accurately reflect the visual content.** To capture the global context and dynamic description of the scene, these captions were further cleaned via the LLM to remove incomplete captions and generate explainable text.
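As a quick arithmetic check of the per-component figures quoted in this reply (the values are copied from the rebuttal's table; nothing else is assumed):

```python
# Per-frame inference cost (ms) of each component, as reported in the rebuttal.
costs_ms = {"caption": 0.0003, "explainable text": 3.84, "anomaly detection": 15.37}

total_ms = sum(costs_ms.values())   # total per-frame cost
fps = 1000.0 / total_ms             # frames processed per second

print(f"total = {total_ms:.2f} ms per frame -> {fps:.2f} FPS")
```

Rounding to two decimals reproduces the 19.21 ms total and 52.06 FPS figures the authors quote.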
Summary: This paper introduces Ex-VAD, an explainable fine-grained video anomaly detection method based on visual-language models and large language models. By integrating modules for anomaly explanation generation, multi-modal feature fusion, and label augmentation and alignment, Ex-VAD achieves both fine-grained classification and explainability. Experimental results show that Ex-VAD significantly outperforms existing methods on the UCF-Crime and XD-Violence datasets, demonstrating its potential in the field of video anomaly detection. Claims And Evidence: The authors claim that Ex-VAD not only detects abnormal behaviors but also generates detailed anomaly explanations, which is valuable for applications requiring precise decision-making. The experimental results support this claim, especially in the fine-grained detection task, where Ex-VAD achieves significantly higher mAP on both datasets compared to other methods. However, it is suggested that the authors further quantify the quality of the explanations to more comprehensively demonstrate their interpretability advantages. Methods And Evaluation Criteria: The proposed Ex-VAD method and its evaluation criteria are well-suited for the problem of video anomaly detection. The use of mAP (mean Average Precision) and AUC (Area Under the Curve) as primary metrics aligns with established practices in the field, ensuring that the method is assessed using standard and widely-accepted benchmarks. However, it would be beneficial if the authors could provide a more detailed discussion on the validation of the explainability of the generated text. Theoretical Claims: I have reviewed the theoretical claims presented in the paper. The paper demonstrates the effectiveness of Ex-VAD in the direction of fine-grained anomaly detection through experimental results. Experimental Designs Or Analyses: I have reviewed the experimental designs and analyses proposed in the paper. 
The methodology is sound, with appropriate use of the UCF-Crime and XD-Violence datasets, which are relevant and widely recognized within the field. However, I suggest that the authors consider adding additional experiments to further validate the robustness of Ex-VAD under various conditions. Supplementary Material: I have reviewed the supplementary materials. The supplementary materials provide insights into the model's detection across different anomaly categories as well as its descriptions of anomalous phenomena. Relation To Broader Scientific Literature: The key contributions of this paper are closely related to the broader scientific literature on fine-grained video anomaly detection and anomaly explanation. The paper builds upon previous work by enhancing the capability of fine-grained anomaly detection while also leveraging large language models to generate explanations for anomalies, which is an area of growing research interest. It would be beneficial if the authors could provide a more detailed discussion on how Ex-VAD compares and contrasts with other recent advancements in the field. Essential References Not Discussed: While the paper does an adequate job of citing relevant literature, there may be additional recent works that are essential for a comprehensive understanding of the context of the contributions. For instance, any recent studies on the integration of visual and textual information in AI systems or advancements in explainable AI could provide further context. It would be beneficial if the authors could include these in their discussion to give a more complete picture of the research landscape. Other Strengths And Weaknesses: ### Strengths 1. Ex-VAD not only detects abnormal behaviors but also generates detailed explanations, which is valuable for applications requiring precise decision-making, such as surveillance. This integration of fine-grained classification and explainability is an innovation in the field. 2. 
The experimental results on two benchmark datasets demonstrate Ex-VAD's superior performance in both fine-grained and coarse-grained detection tasks, proving its competitiveness in video anomaly detection. ### Weaknesses 1. The LAAM module enhances detection performance by expanding label semantics, but this approach may be sensitive to the initial quality of the labels. If the labels are not accurately defined or descriptive enough, performance may degrade. The authors could explore how to maintain robustness when label quality is poor. 2. While Ex-VAD generates anomaly explanations, the paper lacks quantitative assessment of the quality of these explanations. For example, user studies or comparisons with existing explainable methods could be introduced to more comprehensively demonstrate its explainability advantages. Other Comments Or Suggestions: 1. It would be helpful if the authors could provide a more detailed discussion on the limitations of Ex-VAD, including scenarios where the model might not perform optimally. 2. I recommend including a more comprehensive comparison with other recent state-of-the-art methods in video anomaly detection. This would provide a clearer picture of Ex-VAD's advantages and potential areas for improvement. Questions For Authors: 1. Could you elaborate on how the quality and usefulness of the generated explanations were quantitatively assessed? 2. Are there any plans to optimize the computational efficiency of Ex-VAD for real-time applications, and if so, what are the expected improvements in terms of speed and resource usage?" 3. What are the next steps in your research after this study? Are there any plans to address the limitations you've identified or to expand the capabilities of Ex-VAD in new directions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript.

**R1: Robustness of the label.** We use the SOTA large language model GPT to generate M phrases for label expansion and select the top-k among them as the final labels. To ensure the generated labels remain relatively stable, k is set to be much smaller than M. Experiments demonstrate that the best performance is achieved when k=4. This approach ensures a certain level of robustness in the quality of the generated labels.

**R2: Analysis of Anomaly Explanation.** Currently, evaluation methods for video-text tasks are more common in video understanding. Typical quantitative analyses compare the similarity between generated text and reference text using metrics such as BLEU, CIDEr, and METEOR. Additionally, qualitative analysis is conducted by comparing texts generated by different models, where higher relevance to the video indicates better quality. In the appendix, we provide a visualization analysis of our model, which includes interpretable text that, to some extent, reflects the model's understanding of the video. In the final manuscript, we will further supplement this aspect.

**R3: More comprehensive comparison with other recent SOTA methods.** Please see the reply to Reviewer VnC3's "**R4: More methods to compare.**" for a more detailed answer.

**R4: Quantitative analysis of explainable texts.** Quantitative analysis of explainable texts is our next step: we plan to further standardize the process of anomaly explanation text generation and its qualitative analysis. Specifically, we will:
1. optimize the text generation strategy: make the generated anomaly explanations more precise and readable by adjusting the prompt engineering or introducing more reasonable templates;
2.
enhance the standardization of the qualitative analysis: introduce quantitative assessment criteria for the text, such as BLEU, CIDEr, and METEOR.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' reply. My concerns have been addressed, and I will maintain my current score.
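The M-phrase / top-k selection described in R1 above can be sketched roughly as follows. This is an illustrative toy only: the embedding vectors, phrase strings, and the `top_k_labels` helper are invented here for demonstration; the actual method uses GPT-generated phrases and learned text embeddings, with k=4 and k much smaller than M.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_labels(label_vec, candidates, k=4):
    """Keep the k expanded phrases most similar to the original label embedding.

    candidates: list of (phrase, embedding) pairs, e.g. M GPT-generated phrases;
    choosing k << M keeps the expanded label set relatively stable.
    """
    ranked = sorted(candidates, key=lambda c: cosine(label_vec, c[1]), reverse=True)
    return [phrase for phrase, _ in ranked[:k]]

# Toy (made-up) embeddings for the label "Fighting" and candidate phrases.
label = [1.0, 0.0, 0.2]
cands = [("physical altercation", [0.9, 0.1, 0.2]),
         ("people brawling",      [0.8, 0.0, 0.3]),
         ("crowd cheering",       [0.1, 0.9, 0.0]),
         ("violent scuffle",      [0.95, 0.05, 0.15]),
         ("street performance",   [0.0, 1.0, 0.1])]
print(top_k_labels(label, cands, k=3))
```

With these toy vectors, the semantically unrelated phrases ("crowd cheering", "street performance") fall outside the top-k and are discarded, which is the stabilizing effect the rebuttal attributes to choosing k much smaller than M.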
Summary: This paper proposes Ex-VAD, an explainable fine-grained video anomaly detection method that integrates visual-language models (VLMs) and large language models (LLMs). The approach consists of three main modules: the Anomaly Explanation Generation Module (AEGM), the Multi-modal Anomaly Detection Module (MADM), and the Label Augment and Alignment Module (LAAM). The AEGM uses VLMs to generate frame-level captions and leverages LLMs to produce video-level anomaly explanations. The MADM combines text and visual features for coarse-grained anomaly detection, while the LAAM enhances fine-grained classification by expanding and aligning labels. Experimental results demonstrate that Ex-VAD significantly outperforms existing methods on the UCF-Crime and XD-Violence datasets. ## update after rebuttal Claims And Evidence: The authors claim that Ex-VAD outperforms existing methods in both fine-grained and coarse-grained video anomaly detection tasks. The experimental results support this claim, especially in the fine-grained detection task, where Ex-VAD achieves significantly higher mean Average Precision (mAP) compared to other methods on both datasets. Methods And Evaluation Criteria: The design of Ex-VAD is reasonable, integrating multi-modal feature fusion and label augmentation alignment, making it suitable for video anomaly detection tasks. The primary evaluation metrics, including mAP (mean Average Precision) and AUC (Area Under Curve), are standard practices in the field. Theoretical Claims: I have carefully reviewed the theoretical claims made in the paper. The authors argue that introducing the text modality can further enhance the capability of anomaly detection, especially for fine-grained anomaly detection. This reasoning is sound and aligns well with the goals of the paper, and the experimental results also validate this conclusion. 
Experimental Designs Or Analyses: The experimental design is reasonable, as it compares both coarse-grained and fine-grained metrics. Although the coarse-grained metric on the UCF dataset is only 0.21% higher than the SOTA method, there is a significant improvement in the fine-grained metric, such as a 3.67% increase on the UCF dataset. These results demonstrate the effectiveness of the proposed method. Supplementary Material: I have reviewed the appendix part. It contains a description of the qualitative analysis of Ex-VAD. The images demonstrate that Ex-VAD effectively detects unusual anomaly categories in videos, describes anomalous phenomena, and accurately differentiates between normal and abnormal clips within anomalous videos. Relation To Broader Scientific Literature: This method is closely related to the previous VADCLIP approach but differs in that it utilizes LLMs to generate anomaly explanations for videos as textual information rather than relying solely on visual information. Compared to previous methods, this textual information adds more detail to the videos, complementing the visual information and significantly enhancing the capability for fine-grained anomaly detection. Essential References Not Discussed: In addition to the weakly supervised methods and training-free methods mentioned, the authors have not taken into account some recently published approaches, such as those based on LLMs. Discussing these methods would help to clarify whether the explainability of Ex-VAD is effective. Other Strengths And Weaknesses: Strengths: + Ex-VAD combines fine-grained classification with anomaly explanations, filling the gap in explainability of existing video anomaly detection methods. This is particularly important for applications requiring precise responses, such as surveillance and healthcare monitoring. 
+ By integrating text information generated by the AEGM with visual features, Ex-VAD leverages the strengths of both modalities to significantly enhance anomaly detection performance. + The extensive experiments on two benchmark datasets show that Ex-VAD outperforms existing methods in both fine-grained and coarse-grained detection tasks, demonstrating its effectiveness and superiority. Weaknesses: + Although Ex-VAD performs well in terms of computational efficiency, its reliance on multiple complex modules (e.g., VLMs and LLMs) may lead to higher resource consumption in practical deployment. The authors could further explore model lightweighting or optimization strategies. + Ex-VAD relies heavily on pre-trained VLMs and LLMs, which may limit its applicability to specific domains or custom datasets. You may investigate how to fine-tune the model with limited data to improve its performance in specific tasks. Other Comments Or Suggestions: If possible, please include comparisons with some of the latest methods from 2025. Questions For Authors: + Could you elaborate on how you specifically selected and optimized the strategy for multi-modal feature fusion during the training process? + Are there plans to extend the Ex-VAD method to other types of datasets or domains in future work, such as anomaly detection and early warning in autonomous driving? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript.

**R1: Resource consumption.** We appreciate your concern regarding the resource consumption of Ex-VAD due to the integration of VLMs and LLMs. We recognize that integrating multiple complex modules such as VLMs and LLMs may impose high resource requirements at deployment. Therefore, in the experimental section, we compared the training parameters and Multiply-Add Operations (MACs). Table 7 demonstrates that our method achieves a well-balanced trade-off between model complexity and size, optimizing both performance and resource usage effectively.

|Method|Trainable Params|Inference Time|MACs|UCF-Crime (AUC%)|
| :- | :-: | :-: | :-: | :-: |
|RTFM|24.72M|**8.28ms**|126.59G|_85.66_|
|DMU|**6.49M**|16.60ms|_21.00G_|86.75|
|CLIP-TSA|16.41M|18.33ms|102.63G|87.58|
|VADCLIP|35.17M|22.30ms|29.17G|88.02|
|ExVAD|_9.97M_|_15.37ms_|**12.04G**|**88.29**|

**R2: Fine-tune the model.** We appreciate your concern and suggestion about fine-tuning the model. At the beginning of our experiments, we considered two ways to obtain anomaly explanations: the first is to fine-tune the LLM to achieve interpretability of anomalous videos, and the second is to invoke the knowledge of the LLM through prompt design to obtain the desired anomaly explanations. Through the experiments, we found that the computational resources, model complexity, and inference cost required for fine-tuning the LLM are much higher than those of the prompt-based method, while there is not much difference in accuracy. Therefore, our final approach efficiently utilizes the general knowledge of the LLM via prompts to achieve anomaly interpretation and improve detection accuracy.
**R3: Multi-modal feature fusion during the training process.** During the training process, we optimize and select the multimodal fusion strategy in two steps. First, we refer to the fusion strategies commonly used in previous studies, such as concatenation, cross-modal attention, and addition. Second, we try the above methods in the experimental process and select the optimal fusion method by comparing the accuracy.

**R4: Future work.** Thanks for your attention. Our next plan is to further standardize the process of anomaly explanation text generation and its qualitative analysis. Specifically, we will:
1. optimize the text generation strategy: make the generated anomaly explanations more precise and readable by adjusting the prompt engineering or introducing more reasonable templates;
2. enhance the standardization of the qualitative analysis: introduce quantitative assessment criteria for the text, such as BLEU, CIDEr, and METEOR.
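For reference, the BLEU-style metrics repeatedly proposed in these rebuttals reduce to clipped n-gram precision. A minimal BLEU-1 sketch follows (unigram precision with count clipping only, omitting the brevity penalty and higher-order n-grams; the example sentences are made up):

```python
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Clipped unigram precision: the core of BLEU-1 (no brevity penalty)."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    ref_counts = Counter(ref)
    # Each candidate word counts at most as often as it appears in the reference.
    clipped = sum(min(n, ref_counts[w]) for w, n in Counter(cand).items())
    return clipped / max(len(cand), 1)

gen = "a man is hitting another person in the street"
ref = "a man hits another person on the street"
print(round(bleu1(gen, ref), 3))
```

The clipping step is what stops a degenerate caption like "the the the" from scoring perfectly against any reference containing "the"; full BLEU additionally averages over n-gram orders and penalizes overly short candidates.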
Summary: The paper proposes an explainable VAD approach which combines fine-grained classification with explanations. The approach uses a pre-trained VLM and LLM to extract the relevant features, and employs a linear combination of 3 loss functions for the fine-grained classification of anomalous videos. A novel label-enhanced alignment method was used to optimize the feature fusion. Experiments on 2 popular VAD datasets show promising results against the SOTA methods. Claims And Evidence: The experimental results in Table 1 (UCF-Crime), Table 2 (XD-Violence) and Table 3 (coarse-grained, UCF-Crime, XD-Violence) show that the proposed method outperforms other SOTA methods. Methods And Evaluation Criteria: The methods and evaluation criteria are sensible. However, there are other VAD datasets which should also be included in the experiments, e.g. ShanghaiTech [1] and UCSD-Ped2 [2]. [1] Dan Xu, Rui Song, Xinyu Wu, Nannan Li, Wei Feng, and Huihuan Qian. Video anomaly detection based on a hierarchical activity discovery within spatio-temporal contexts, Neurocomputing. [2] Wen Liu, Weixin Luo, Dongze Lian, and Shenghua Gao. Future frame prediction for anomaly detection–a new baseline. In Proceedings of the IEEE conference on computer vision and pattern recognition. Theoretical Claims: No theoretical claims. Equations were read but not checked. Experimental Designs Or Analyses: No major issue on experimental designs and analysis. The exclusion of other benchmarks, like ShanghaiTech and UCSD-Ped2, should be explained to avoid suspicion of cherry-picking results. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: The paper's approach is similar to other explainable AI using VLMs. The application of fine-grained VLM is also not very novel. However, the experimental results are promising. Essential References Not Discussed: The earlier work in 2023 using VLM for VAD is not cited and discussed. Chen, W., Ma, K. T., Yew, Z. J., Hur, M., & Khoo, D.
A. A. (2023). TEVAD: Improved video anomaly detection with captions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5549-5559). Other Strengths And Weaknesses: Strengths 1. The paper's experimental results are strong for the 2 benchmarks experimented upon. 2. The use of pretrained VLM models for explainable VAD is somewhat novel, but there are some prior works. 3. The Label Augment and Alignment Module is incrementally novel. Weaknesses 1. Limited technical contributions. The main novelty is the pipeline of modules and the Label Augment and Alignment module. 2. Experimental results, while strong, are not particularly significant compared to SOTA. Other Comments Or Suggestions: Typos: Table 6: "Lable Augmnet" should be "Label Augment" Questions For Authors: Will the paper include the two benchmarks, ShanghaiTech and UCSD-Ped2, to show the generalizability of the proposed method? Ethical Review Concerns: Not applicable. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We will add these valuable comments to the revised manuscript. **R1: Novelty of the Proposed Pipeline.** We apologize for failing to highlight our contributions and novelty. Different from existing coarse-grained VAD, our method is unique in using the generated anomaly text for both anomaly explanation and fine-grained anomaly detection. Compared with traditional fine-grained anomaly detection, our accuracy shows a large improvement, e.g., on the UCF dataset, the average mAP (AVG) is 3.67% higher than SOTA. Additionally, compared with fine-tuning a large language model (LLM) or a multimodal large language model (MLLM) for anomaly detection, our approach requires significantly fewer parameters (9.19M) and achieves a faster inference speed (15.37ms). This demonstrates a good trade-off between complexity and effectiveness. **R2: Reasons for not using ShanghaiTech and UCSD-Ped2.** The key idea of our method aims at the fine-grained detection and description of the causes of anomalies, and its advantage is to generate anomaly explanation text through VLMs and LLMs, which both describes the causes of video anomalies and greatly improves the accuracy of fine-grained anomaly classification. Therefore, we select datasets that have been used for fine-grained anomaly detection in existing studies, such as UCF-Crime and XD-Violence, whereas the ShanghaiTech and UCSD-Ped2 datasets have not been used by researchers for fine-grained anomaly detection, and thus cannot be used to demonstrate the superiority of our method. Considering your suggestion, we will add related experiments for the ShanghaiTech and UCSD-Ped2 datasets. Due to time constraints, we were only able to add the results of the coarse-grained anomaly detection experiment on ShanghaiTech, and will add the results on UCSD-Ped2 later. 
As shown in the table below, we performed coarse-grained anomaly detection on the ShanghaiTech dataset and compared it with other methods:

|**Method**|Feature|AUC(%)|
| - | - | - |
|GAN-Anomaly (CVPR2019)|TSN|84.4|
|RTFM (ICCV2021)|I3D|97.21|
|CMRL (CVPR2023)|I3D|97.60|
|TEVAD (CVPR2023)|I3D|98.10|
|PE-MIL (CVPR2024)|I3D|**98.35**|
|STPrompt (ACMMM2024)|CLIP|97.81|
|Ours|CLIP|_98.23_|

Although our method is not SOTA, it is only 0.12% below the best result, 0.13% higher than TEVAD, and 0.42% higher than STPrompt, which also uses CLIP to extract features. This experiment shows that our method is also applicable to the ShanghaiTech dataset. **R3: Citation and discussion of TEVAD.** TEVAD makes full use of a VLM to generate video captions as a textual modality and achieves improved accuracy in anomaly detection by fusing it with the visual modality. We will cite and discuss it in the final manuscript. Both TEVAD and our method utilize VLMs to generate captions, which extends the traditional single visual modality to a visual-and-textual multimodal branch, improving anomaly detection. Our advantages over TEVAD are shown below. First, our proposed method not only determines whether a video frame is abnormal but also further identifies the abnormal category (Abuse, Arrest, etc.). Second, instead of just generating captions, we further leverage LLMs to describe the video anomalies. **R4: Others.** Thank you for pointing out the spelling error; we will change Table 6: 'Lable Augmnet' to 'Label Augment' in the camera-ready paper.
On the Adversarial Robustness of Multi-Kernel Clustering
Accept (poster)
Summary: This paper examines the vulnerability of MKC methods to adversarial perturbations—an area that remains understudied. The authors introduce AdvMKC, a reinforcement learning framework that generates subtle perturbations to deceive MKC methods in black-box settings. Using proximal policy optimization and an innovative generator-clusterer architecture, AdvMKC stabilizes clustering outcomes while reducing computational overhead. Extensive experiments across seven datasets and eleven MKC methods validate the framework's effectiveness, robustness, and transferability. Claims And Evidence: Yes. The experimental results strongly support the main claims in the submission. The results are presented clearly and thoroughly, with a comprehensive set of datasets demonstrating the method's effectiveness. Methods And Evaluation Criteria: The proposed method leverages reinforcement learning—an approach particularly well-suited for generating adversarial perturbations. Theoretical Claims: The proofs are generally well-structured and the key assumptions are clearly stated. Experimental Designs Or Analyses: The experimental setup is generally appropriate, with a good selection of benchmarks and adversarial examples used to evaluate robustness. Supplementary Material: No supplementary materials were provided with this submission. Relation To Broader Scientific Literature: The paper makes significant contributions in the area of adversarial robustness in MKC, proposing a new reinforcement-learning-based adversarial attack framework (AdvMKC). Essential References Not Discussed: There are no essential references missing from the discussion. Other Strengths And Weaknesses: Strengths - This paper is pioneering in its examination of MKC methods' adversarial robustness under black-box settings. - This paper proposes two additional comparison methods to fully demonstrate the effectiveness of the proposed approach. 
- The motivation is clear and the paper is well-written and easy to follow. Weaknesses - In Table I, some cases show improved MKC method performance under adversarial conditions. The authors should clarify why this phenomenon occurs. - In Eq. (16), the authors introduce two hyperparameters to balance three loss terms. However, they do not conduct sensitivity analysis in the evaluation section to demonstrate how these parameters impact the attack performance. - Considering this is the first work to assess the adversarial robustness of MKC methods, the authors should release their source code to promote further research. Other Comments Or Suggestions: The legends in the experimental results figures are too small and should be enlarged. Questions For Authors: - In Table I, some cases show improved performance of the MKC method under adversarial conditions. Could the authors clarify why this phenomenon occurs? - In Eq. (16), the authors introduce two hyperparameters to balance three loss terms. Could the authors provide a sensitivity analysis in the evaluation section to demonstrate how these parameters impact the attack performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer HijP’s thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** In Table I, some cases show improved MKC method performance under adversarial conditions. The authors should clarify why this phenomenon occurs. **R1:** We appreciate the reviewer’s insightful comments. As noted in Subsection 6.1, to ensure the stealthiness of adversarial attacks, we introduce the hyperparameter $\epsilon$ to constrain the magnitude of injected noise. To ensure its effectiveness, $\epsilon$ is set to a very low value. However, these slight perturbations may lead the victim MKC method to learn more robust representations, resulting in a slight improvement in clustering performance. We will clarify this point in the revision. **W2:** In Eq. (16), the authors introduce two hyperparameters to balance three loss terms. However, they do not conduct sensitivity analysis in the evaluation section to demonstrate how these parameters impact the attack performance. **R2:** Thank you for the reviewer’s comments. Based on your suggestion, we conduct a sensitivity analysis on the hyperparameters $\alpha$, $\beta$, and $\gamma$ in Eq. (16), which regulate the balance among loss terms. To ensure comparable magnitudes across these terms, we select $\alpha$ from $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 0.6\rbrace$, $\beta$ from $\lbrace e^{-5}, e^{-4}, e^{-3} \rbrace$, and $\gamma$ from $\lbrace e^{-5}, 2 e^{-5}, 3 e^{-5}, 4 e^{-5}, 5 e^{-5}, 6 e^{-5}\rbrace$. The analysis is conducted on the MSRCv1 and BBCSport datasets, with results shown in [the following eight figures](https://anonymous.4open.science/api/repo/AdvMKC-rebuttal-CC14/file/sensitivity_analysis.png?v=0fcf01a5). We conducted a total of **216 evaluations**. Due to the limitations of OpenReview.net, we present the evaluations on the following anonymous GitHub repository. 
For example, as shown in Figure 13, changes in $\alpha$, $\beta$, and $\gamma$ have no significant effect on the NMI metric for the BBCSport dataset. These results suggest that as long as the loss terms remain balanced in magnitude, variations in these parameters have minimal impact on attack performance. Experimental results are presented at https://anonymous.4open.science/r/AdvMKC-rebuttal-CC14/sensitivity_analysis.png. **W3:** Considering this is the first work to assess the adversarial robustness of MKC methods, the authors should release their source code to promote further research. **R3:** We appreciate the reviewer’s comments and acknowledge the concern. The code will be released upon the paper’s acceptance.
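As an aside for readers, the magnitude constraint on the injected noise discussed in R1 is, in typical adversarial-attack implementations, an element-wise projection of the raw perturbation onto the interval $[-\epsilon, \epsilon]$. A minimal hedged sketch (function and variable names are illustrative, not taken from the paper's code):

```python
def constrain_perturbation(delta, eps):
    """Element-wise projection of a raw perturbation onto [-eps, eps],
    keeping the injected noise small enough to remain stealthy."""
    return [max(-eps, min(eps, d)) for d in delta]

# Large components are clipped to the budget; small ones pass through.
print(constrain_perturbation([0.2, -0.3, 0.01], eps=0.05))  # [0.05, -0.05, 0.01]
```

With such a projection, attack strength is traded off against imperceptibility purely through the single budget `eps`.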
Summary: AdvMKC proposes a novel black-box adversarial attack for multi-kernel clustering that employs reinforcement learning—specifically, proximal policy optimization with an advantage function—within a generator-clusterer framework. This approach introduces minimal perturbations to mislead multi-kernel clustering while concurrently reducing computational overhead. Comprehensive theoretical analysis and experimental evaluations across several datasets and different MKC variants verify the method's efficacy, robustness, and transferability. Claims And Evidence: Yes, the submission’s claims are well-supported by both rigorous theoretical analysis and comprehensive experiments. Methods And Evaluation Criteria: Yes, the methods and benchmarks are well-suited to the problem. Theoretical Claims: Yes, the proofs for adversarial impact on MKC are verified. Experimental Designs Or Analyses: Yes Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: Yes. The paper extends established adversarial and clustering work by integrating reinforcement learning, perturbation strategies, and a generator-clusterer framework to robustify MKC. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The study introduces the first black-box adversarial evaluation for multi-kernel clustering and presents an innovative reinforcement learning-based attack framework, AdvMKC. 2. The theoretical insights provided indicate that the impact of adversarial perturbations is governed by both their magnitude and frequency. 3. The proposed method has been rigorously validated on a diverse range of benchmark datasets. Weaknesses: 1. Although the generator-clusterer framework is claimed to reduce computational costs, prior research has already enhanced multi-view clustering efficiency, and clustering processes typically do not entail significant runtime overhead. 
It would be beneficial for the authors to report the actual running times of each algorithm to substantiate this claim. 2. While the effects of parameters such as the selected sample count ($N_d^p$), perturbed view count ($N_k^p$), and perturbation magnitude ($\epsilon$) on attack performance are examined, the loss function in Equation (16), which comprises four terms and three hyper-parameters, lacks a comprehensive sensitivity analysis. 3. The occurrence of negative values in the ARI metric, as illustrated in Table 2, requires clarification on whether these outcomes are expected. Other Comments Or Suggestions: None Questions For Authors: Please refer to the weakness. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer rAQk's thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** Although the generator-clusterer framework is claimed to reduce computational costs, prior research has already enhanced multi-view clustering efficiency, and clustering processes typically do not entail significant runtime overhead. It would be beneficial for the authors to report the actual running times of each algorithm to substantiate this claim. **R1:** We appreciate the reviewer’s insightful comments. We evaluate the time consumption of AdvMKC compared to two baseline methods under the same experimental settings as in Table 1. The results, presented in [the following table](https://anonymous.4open.science/api/repo/AdvMKC-rebuttal-CC14/file/running_time.png?v=12f17fd6), show that AdvMKC requires less time while achieving the highest attack performance among all methods. We conducted a total of **126 evaluations**. Due to the rebuttal's space limitations, we provide the evaluations via the following anonymous GitHub link. For example, when attacking the SMKKM MKC method on the MSRCv1 dataset, AdvMKC completes in 640.28 seconds, which is significantly lower than the 1259.92 seconds and 1162.00 seconds required by RAMKC and EAMKC, respectively. We will include these experimental results in the revision. Experimental results are presented at https://anonymous.4open.science/r/AdvMKC-rebuttal-CC14/running_time.png. **W2:** While the effects of parameters such as the selected sample count ($N_d^p$), perturbed view count ($N_k^p$), and perturbation magnitude ($\epsilon$) on attack performance are examined, the loss function in Equation (16), which comprises four terms and three hyper-parameters, lacks a comprehensive sensitivity analysis. **R2:** Thank you for the reviewer’s comments. 
Based on your suggestion, we conduct a sensitivity analysis on the hyperparameters $\alpha$, $\beta$, and $\gamma$ in Eq. (16), which regulate the balance among loss terms. To ensure comparable magnitudes across these terms, we select $\alpha$ from $\lbrace 0.1, 0.2, 0.3, 0.4, 0.5, 0.6 \rbrace$, $\beta$ from $\lbrace e^{-5}, e^{-4}, e^{-3} \rbrace$, and $\gamma$ from $\lbrace e^{-5}, 2 e^{-5}, 3 e^{-5}, 4 e^{-5}, 5 e^{-5}, 6 e^{-5} \rbrace$. The analysis is conducted on the MSRCv1 and BBCSport datasets, with results shown in [the following eight figures](https://anonymous.4open.science/api/repo/AdvMKC-rebuttal-CC14/file/sensitivity_analysis.png?v=0fcf01a5). We conducted a total of **216 evaluations**. Due to the limitations of OpenReview.net, we present the evaluations on the following anonymous GitHub repository. For example, as shown in Figure 13, changes in $\alpha$, $\beta$, and $\gamma$ have no significant effect on the NMI metric for the BBCSport dataset. These results suggest that as long as the loss terms remain balanced in magnitude, variations in these parameters have minimal impact on attack performance. Experimental results are presented at https://anonymous.4open.science/r/AdvMKC-rebuttal-CC14/sensitivity_analysis.png. **W3:** The occurrence of negative values in the ARI metric, as illustrated in Table 2, requires clarification on whether these outcomes are expected. **R3:** We appreciate the reviewer’s comments. The ARI can take negative values because it accounts for randomness in clustering assignments. It adjusts the Rand Index by considering the expected similarity between two random clusterings. If the observed clustering is worse than a random assignment, the ARI can be negative. We will include this clarification in the revision.
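For concreteness, the chance-adjustment behavior of the ARI described in R3 can be reproduced with a few lines of standard-library Python (a hedged illustration, not the paper's evaluation code); a prediction that splits every ground-truth pair scores below zero:

```python
from math import comb

def adjusted_rand_index(y_true, y_pred):
    """ARI = (Index - Expected) / (Max - Expected); negative when the
    clustering agrees with the ground truth less than chance would."""
    n = len(y_true)
    cells, a, b = {}, {}, {}
    for t, p in zip(y_true, y_pred):           # contingency counts
        cells[(t, p)] = cells.get((t, p), 0) + 1
        a[t] = a.get(t, 0) + 1                 # true-class sizes
        b[p] = b.get(p, 0) + 1                 # predicted-cluster sizes
    index = sum(comb(c, 2) for c in cells.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)

# Every pair grouped in the ground truth is split by the prediction,
# which is worse than a random assignment on average.
print(adjusted_rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # -0.5
```

A label permutation of a perfect clustering still scores 1.0, so negative values in a results table signal worse-than-chance structure rather than a bookkeeping error.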
Summary: This manuscript addresses the underexplored vulnerability of MKC methods to adversarial perturbations. To evaluate the adversarial robustness of MKC in a black-box setting, the authors propose AdvMKC, a novel framework grounded in reinforcement learning. AdvMKC employs proximal policy optimization with an advantage function to ensure stable optimization and mitigate clustering instability. It introduces a generator-clusterer architecture, where the generator produces adversarial perturbations and the clusterer simulates MKC behavior, thereby reducing computational overhead. Experimental results demonstrate that AdvMKC is effective, robust, and transferable across various MKC techniques. Claims And Evidence: Yes, all claims presented in the manuscript are substantiated by experimental results and corresponding proofs. Methods And Evaluation Criteria: Yes, adversarial attacks on MKC methods have been insufficiently explored, and the proposed method is the first to address this issue. The datasets used in the evaluations are standard in MKC research. Theoretical Claims: I have reviewed the proof in the Appendix and found no issues. Experimental Designs Or Analyses: I have reviewed the evaluation design and found that the authors conduct thorough assessments to evaluate the proposed method's effectiveness, robustness, and transferability. Supplementary Material: No, the authors do not provide any supplementary materials. Relation To Broader Scientific Literature: This paper examines the adversarial robustness of MKC methods, an area that has not been previously studied. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. This manuscript is the first to investigate the adversarial robustness of MKC methods in a black-box setting, where an attacker introduces imperceptible, targeted perturbations to deceive unknown MKC methods. 2. 
The manuscript provides valuable theoretical insights into the effects of adversarial perturbations on MKC methods. 3. The authors perform comprehensive evaluations across seven benchmark datasets and eleven MKC methods to assess the effectiveness, robustness, and transferability of the proposed method. Weaknesses 1. The authors do not provide the code. If released, it would enhance reproducibility and be beneficial for the research community. 2. When evaluating the transferability of adversarial perturbations generated by the proposed method, the authors should clarify the meaning of the surrogate MKC method referenced in Table 3. 3. The authors should provide further details on the experimental setup, such as the specific view IDs used in adversarial attacks, given that each dataset comprises multiple data views. 4. In figures such as Figure 2 and Figure 3, the ACC metric curve is not clearly presented. However, in certain figures, such as Figure 10, it is clear that the four lines represent four distinct metrics. The authors should address this issue and provide clarification. Other Comments Or Suggestions: In Figure 1, the font size of some text is too small. I recommend increasing the font size for better clarity. Questions For Authors: Please consider addressing the points raised in the above weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer W7KP’s thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** The authors do not provide the code. If released, it would enhance reproducibility and be beneficial for the research community. **R1:** We appreciate the reviewer’s comments and acknowledge the concern. The code will be released upon the paper’s acceptance. **W2:** When evaluating the transferability of adversarial perturbations generated by the proposed method, the authors should clarify the meaning of the surrogate MKC method referenced in Table 3. **R2:** Thank you for the reviewer’s insightful comments. In Subsection 6.4, we consider a scenario where frequent queries to the victim MKC method are impractical. To address this, the attacker can utilize an alternative MKC method, referred to as the surrogate MKC method, to approximate the victim MKC method’s functionality. This allows the attacker to optimize the perturbation generator to produce adversarial noise. We will clarify the surrogate MKC method in the revision. **W3:** The authors should provide further details on the experimental setup, such as the specific view IDs used in adversarial attacks, given that each dataset comprises multiple data views. **R3:** We appreciate the reviewer’s comments. The evaluation details, including datasets, MKC methods, and compared methods, are provided in Appendix C. We conduct evaluations across seven datasets: MSRCv1, BBCSport, Caltech101-7, HW-6Views, Citeseer, NUS-WIDE-SCENE, and ProteinFold. To ensure a balanced evaluation, we select data views with low dimensionality and limit the total number of perturbed views to less than 50%. Specifically, for the MSRCv1 dataset, we perturb the first and fourth views, while for BBCSport and Citeseer, only the first view is perturbed. 
In the ProteinFold dataset, we perturb the first, second, third, and fourth views, whereas, in Caltech101-7, only the first and second views are affected. For HW-6Views, the perturbation is applied to the third and fourth views, and for NUS-WIDE-SCENE, the second and fifth views are perturbed. Additional details can be found in Table 4. We will further clarify this information and provide additional evaluation details in the revision. **W4:** In figures such as Figure 2 and Figure 3, the ACC metric curve is not clearly presented. However, in certain figures, such as Figure 10, it is clear that the four lines represent four distinct metrics. The authors should address this issue and provide clarification. **R4:** Thank you for the reviewer’s insightful comments. As shown in Tables 3 and 4, the accuracy and purity metrics coincide in some cases. Since clustering is an unsupervised task, both metrics rely on the best possible label assignment. The accuracy metric determines the optimal one-to-one mapping between cluster IDs and ground truth labels using the Hungarian algorithm [1], whereas the purity metric assigns each cluster to the most frequent ground truth label. When clusters closely align with single ground truth classes (i.e., each cluster predominantly contains data points from one class), both metrics yield identical values. In such cases, the best mapping in Hungarian accuracy and the dominant-class assignment in purity produce the same results. We will include further explanations in the revision. [1] Kuhn et al. The Hungarian Method for the Assignment Problem. 1955.
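The coincidence of the two metrics described in R4 can be illustrated with a small standard-library sketch (a brute-force permutation search stands in for the Hungarian algorithm at toy scale; names are illustrative, not from the paper):

```python
from itertools import permutations
from collections import Counter

def clustering_accuracy(y_true, y_pred):
    """Best one-to-one cluster-to-label mapping (brute force in place of
    the Hungarian algorithm, which is fine for toy-sized inputs)."""
    clusters, labels = sorted(set(y_pred)), sorted(set(y_true))
    # Contingency matrix: rows = predicted clusters, cols = true labels.
    w = [[0] * len(labels) for _ in clusters]
    for t, p in zip(y_true, y_pred):
        w[clusters.index(p)][labels.index(t)] += 1
    k = max(len(clusters), len(labels))
    best = max(
        sum(w[i][perm[i]] for i in range(len(clusters)) if perm[i] < len(labels))
        for perm in permutations(range(k))
    )
    return best / len(y_true)

def purity(y_true, y_pred):
    """Each cluster is assigned its most frequent ground-truth label."""
    total = sum(
        Counter(t for t, p in zip(y_true, y_pred) if p == c).most_common(1)[0][1]
        for c in set(y_pred)
    )
    return total / len(y_true)

# Each cluster aligns with a single class: the optimal one-to-one mapping
# and the dominant-class assignment agree, so both metrics coincide.
print(clustering_accuracy([0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0]))  # 1.0
print(purity([0, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0]))               # 1.0

# Two clusters dominated by the same class: purity exceeds accuracy,
# because accuracy enforces a one-to-one cluster-to-label mapping.
print(clustering_accuracy([0, 0, 0, 0], [0, 0, 1, 1]))  # 0.5
print(purity([0, 0, 0, 0], [0, 0, 1, 1]))               # 1.0
```

This makes concrete why the two curves overlap exactly when clusters closely match single ground-truth classes, as the rebuttal argues.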
Summary: The paper investigates the adversarial robustness of MKC in a black-box setting, a largely unexplored area. It introduces AdvMKC, a novel reinforcement-learning-based attack framework that injects imperceptible perturbations to mislead MKC methods. AdvMKC employs proximal policy optimization with an advantage function to handle clustering instability and utilizes a generator-clusterer framework to approximate MKC behavior while reducing computational costs. Theoretical analysis and extensive experiments on seven datasets and eleven MKC methods confirm the effectiveness, robustness, and transferability of AdvMKC. Claims And Evidence: The claims are generally supported by theoretical analysis and extensive experiments. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes, the experiments on seven datasets and eleven MKC methods are reviewed. Supplementary Material: No Relation To Broader Scientific Literature: The paper extends adversarial robustness research to MKC, building on prior work in adversarial attacks and kernel methods. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths • This paper introduces AdvMKC, a reinforcement-learning-based attack with a perturbation generator and clusterer for efficient optimization. • This paper provides insights into the impact of adversarial perturbations on MKC performance. • This paper validates findings through experiments on seven datasets and eleven MKC methods, demonstrating effectiveness, robustness, and transferability. Weaknesses • This paper assumes the victim MKC method operates as a black box with no direct access. Given this realistic constraint, where frequent queries are not feasible, how does the proposed attack function effectively in such a scenario? 
• Figures 11–14 show that as the number of clustering operations increases, AdvMKC outperforms the two baseline methods in attack effectiveness. However, in some cases, AdvMKC does not achieve the best performance during the initial attack phase. The authors should provide clarification on these observations. Other Comments Or Suggestions: NA Questions For Authors: See above “Weakness”. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely appreciate Reviewer yajj’s thorough and constructive review. We provide point-by-point responses to the raised weaknesses as follows:** --- **W1:** This paper assumes the victim MKC method operates as a black box with no direct access. Given this realistic constraint, where frequent queries are not feasible, how does the proposed attack function effectively in such a scenario? **R1:** We appreciate the reviewer’s insightful comments. Conducting attacks on multi-view clustering requires extensive queries to the victim MKC method, resulting in high resource consumption. To mitigate this, we propose two countermeasures: 1. We design a clusterer that emulates MKC behavior. During attacks, this clusterer provides rewards, eliminating the need to query the victim black-box MKC method. As demonstrated in Appendix D, AdvMKC achieves the highest attack performance with the same number of victim MKC queries. 2. When querying the victim MKC method is excessively costly, we propose an alternative approach: querying a surrogate MKC method under the attacker's control. As shown in Table 3, optimizing the generator using the surrogate MKC method enables the crafted perturbations to effectively mislead the victim MKC method. We will clarify this issue in the revision. **W2:** Figures 11–14 show that as the number of clustering operations increases, AdvMKC outperforms the two baseline methods in attack effectiveness. However, in some cases, AdvMKC does not achieve the best performance during the initial attack phase. The authors should provide clarification on these observations. **R2:** We appreciate the reviewer’s comments. The performance difference stems from the distinct generation methods used in AdvMKC compared to the other two approaches. As shown in Appendix C, RAMKC injects Gaussian noise into the original data, while EAMKC optimizes the mean and variance of noise distributions using the LM-CMA strategy and a reward function. 
In contrast, AdvMKC employs a neural network to generate perturbations. Due to the initial parameter settings, the perturbation magnitude may be small at the beginning, limiting AdvMKC’s performance in the early attack phase. However, once the memory buffer $\mathcal{B}$ is filled, the attacker can optimize the generator’s parameters, resulting in improved attack performance. We will address this concern in the revision.
GoIRL: Graph-Oriented Inverse Reinforcement Learning for Multimodal Trajectory Prediction
Accept (poster)
Summary: In this paper, the authors introduce a Graph-oriented Inverse Reinforcement Learning (GoIRL) framework for multimodal trajectory prediction. Specifically, (1) to capture the complex scene context in a structured manner, they use vectorized representations of the environment (scene features), (2) to integrate detailed lane information into the prediction model, they aggregate lane-graph features into a grid space, allowing the model to understand the spatial relationships and constraints of the driving environment, (3) they utilize a hierarchical parameterized trajectory generator for improving prediction accuracy and a Markov chain Monte Carlo (MCMC)-augmented probability fusion for boosting prediction confidence. ## update after rebuttal I gave a 'weak accept' score because combining IRL and graph-based context encoding is a good try in trajectory prediction. Thanks. Claims And Evidence: Overall, claims made in this paper are supported by evidence. 1. The integration of MaxEnt IRL with vectorized context representations in the GoIRL framework is explained in detail; Section 3 provides detailed descriptions of the framework and related operations. 2. The claim about the performance “achieves state-of-the-art performance on the large-scale Argoverse & nuScenes” is supported by the quantitative results reported in Tables 1, 2, and 3. In these tables, the authors compare their solution with other existing solutions. There are several recent works shown in the Argoverse motion forecasting page https://paperswithcode.com/sota/motion-forecasting-on-argoverse-cvpr-2020. It would be better to compare the proposed solution with some existing solutions that rank at the top of this page. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the trajectory prediction task. It would be better if the authors could report the accuracy for each prediction step. 
In this case, readers may get a better sense of the performance of the proposed solution, e.g., the model performs better at the first few steps and performs worse at the last step, etc. Theoretical Claims: This paper does not have challenging mathematical proofs or theoretical claims. The model operations and loss functions used in this paper look reasonable and should work for the trajectory prediction task. Experimental Designs Or Analyses: Overall, the experimental designs and analyses make sense. However, as mentioned above, it would be better to compare the proposed solution with more existing solutions to prove the effectiveness of the proposed solution. Supplementary Material: The supplementary material provides more details of the proposed solution, e.g., implementation details, training details, visualization results, and more discussion. Relation To Broader Scientific Literature: The proposed solution combines Inverse Reinforcement Learning with graph-based context to improve prediction performance and generalization. To achieve this goal, several components of this paper were adapted from existing work. For example: 1. Context Encoder: the Dilated LaneConv operator proposed in LaneGCN is applied. 2. Policy Generator: follows the MaxEnt IRL framework. Essential References Not Discussed: It would be better if the authors could discuss more recent work, for example: * Lan, Zhiqian, Yuxuan Jiang, Yao Mu, Chen Chen, and Shengbo Eben Li. "Sept: Towards efficient scene representation learning for motion prediction." arXiv preprint arXiv:2309.15289 (2023). * Zhou, Zikang, Zihao Wen, Jianping Wang, Yung-Hui Li, and Yu-Kai Huang. "Qcnext: A next-generation framework for joint multi-agent trajectory prediction." arXiv preprint arXiv:2306.10508 (2023). * Yao, Yue, Shengchao Yan, Daniel Goehring, Wolfram Burgard, and Joerg Reichardt. 
"Improving Out-of-Distribution Generalization of Trajectory Prediction for Autonomous Driving via Polynomial Representations." In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 488-495. IEEE, 2024. Other Strengths And Weaknesses: Other Strengths: 1. Combining the IRL and graph-based context encoding is a good try in trajectory prediction. The purpose of the usage of the inference learning is explained clearly and makes sense. 2. The framework design should be generalize well to new data and different domains. Other Weaknesses: 1. It will be better to report the complexity of the proposed solution. The integration of multiple components make the framework complex, and may involve challenges in real world use case. Reporting the complexity/latency of the solution will be helpful to readers. 2. The performance of the model may rely on the quality of training data. For example, the accuracy of the lane-graph feature, observed trajectories, etc. It will be better to report the performance with noisy inputs. Other Comments Or Suggestions: 1. It will be better to report some failure cases, and make some discussions about the failures. 2. The interpretability of the proposed model is also a concern. It will be better to come up with a way to improve the interpretability of model’s predictions. Questions For Authors: 1. Do we need to use different grid space for different scenes/datasets? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough and thoughtful feedback. We are grateful for the recognition of our work's motivations, technical contributions, framework designs, and experimental results, as well as for the constructive suggestions for improvement. Below, we address each of the reviewer's comments in detail. 1. **Accuracy at Each Prediction Step.** Thank you for your valuable suggestion. The following table (https://anonymous.4open.science/api/repo/Anonymous-F687/file/Tab1.png) presents the Displacement Error (DE) per prediction step for the forecasted trajectory. As expected, the model performs more accurately in the initial steps, with errors increasing in later steps. This trend aligns with the inherent uncertainty of long-horizon predictions. We will incorporate this analysis into the revised manuscript. 2. **Recent Solutions & References.** We greatly appreciate the reviewer's recommendations for recent solutions. The referenced works propose different paradigms: - Ref. [1] employs a self-supervised approach leveraging all data (including the test set) for pretraining. - Ref. [2] introduces a powerful query-centric paradigm for scene encoding within a supervised learning paradigm. - Ref. [3] focuses on improving out-of-distribution generalization by introducing a novel evaluation protocol. Our approach differs in that it emphasizes a graph-based IRL framework to address covariate shift while maintaining competitive performance on standard trajectory prediction benchmarks. We will cite and discuss these works in the revised manuscript. 3. **Model Inference Latency.** Thank you for this important consideration. Our model achieves an average inference latency of approximately 30 ms on an NVIDIA RTX 3090 GPU using a standard PyTorch implementation. We will include this information in the revision. 4. **Performance with Noisy Inputs.** We appreciate the suggestion to evaluate our model's robustness. 
To assess this, we introduced Gaussian noise with zero mean and varying standard deviations (STD) to the inputs. The results, summarized in the following table (https://anonymous.4open.science/api/repo/Anonymous-F687/file/Tab2.png), indicate that while significant noise degrades prediction accuracy, minor perturbations have minimal impact. This suggests that our model exhibits a degree of robustness to moderate input noise. We will include these findings in the revised manuscript. 5. **Failure Cases.** Thank you for the valuable suggestion to analyze failure cases. We provide several representative failure scenarios, which can be accessed at https://anonymous.4open.science/api/repo/Anonymous-F687/file/Figure1.jpg. - Figure 1(a) illustrates how poor observations of static or slow-moving agents can negatively impact trajectory predictions. - Figure 1(b) demonstrates that inaccuracies in speed prediction can affect longitudinal accuracy. - Figure 1(c) highlights a limitation where the model fails to predict a lane change in the absence of explicit cues. We believe these examples offer some insights into the conditions under which our model may struggle. These failure cases, along with a discussion of their potential causes, will be included in the revised manuscript. 6. **Model Interpretability.** We sincerely appreciate your insightful comment. Unlike fully supervised models that solely learn from data distributions, our graph-based IRL approach follows the MaxEnt IRL framework, which provides a learned reward function. This offers meaningful insights into the model's decision-making process and allows for the integration of explicit constraints within the reward function, potentially enhancing interpretability. As discussed in Appendix C, we provide preliminary analysis on this aspect, and we recognize that further exploration of interpretability remains an important avenue for future research. 7. 
**Grid Resolution for Different Scenes.** Thank you for this instructive question. The choice of grid resolution involves a trade-off between efficiency and accuracy. In our implementation, we simply use a fixed grid resolution for both the Argoverse and nuScenes datasets, complemented by a refinement module to mitigate its impact. However, adapting the grid resolution to the specific driving scene could further improve performance. **References** [1] SEPT: Towards Efficient Scene Representation Learning for Motion Prediction. [2] QCNeXt: A Next-Generation Framework for Joint Multi-Agent Trajectory Prediction. [3] Improving Out-of-Distribution Generalization of Trajectory Prediction for Autonomous Driving via Polynomial Representations. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. The authors responded to my questions in a great way. They addressed my concerns. I do not have additional questions. Thank you so much. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to review our work. We sincerely appreciate your encouraging feedback and the opportunity to address your concerns. Your insights have been instrumental in helping us refine and strengthen our manuscript. We are truly grateful for your thoughtful support.
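As a concrete illustration of the noise-robustness evaluation described in point 4 of the rebuttal above, the perturbation protocol might look like the following sketch. This is our own minimal reconstruction, assuming 2-D trajectory arrays; `perturb_trajectories` and `displacement_error` are illustrative names, not the authors' code.

```python
import numpy as np

def perturb_trajectories(trajs, std, rng=None):
    """Add zero-mean Gaussian noise with standard deviation `std`
    to observed trajectory coordinates of shape [N, T, 2]."""
    rng = np.random.default_rng(rng)
    return trajs + rng.normal(0.0, std, size=trajs.shape)

def displacement_error(pred, gt):
    """Mean Euclidean displacement error between two [N, T, 2] arrays."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

Sweeping `std` over several values and re-running the predictor on the perturbed inputs would reproduce the kind of robustness table linked in the rebuttal.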
Summary: This paper focuses on the problem of trajectory prediction, which is hard because of the inherent uncertainty and underlying multimodality. Previous methods mainly focus on behavior cloning, which has been shown to suffer from a covariate shift problem. Therefore, this paper proposes to use IRL to solve this problem. In particular, the authors propose a Graph-oriented Inverse Reinforcement Learning (GoIRL) framework, which is an IRL-based predictor equipped with vectorized context representations. They develop a feature adaptor to effectively aggregate lane-graph features into grid space, enabling seamless integration with the maximum entropy IRL paradigm to infer the reward distribution and obtain the policy. Extensive experimental results show that the proposed approach not only achieves state-of-the-art performance on the large-scale Argoverse & nuScenes motion forecasting benchmarks but also exhibits superior generalization abilities compared to existing supervised models. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, all of them. Relation To Broader Scientific Literature: Trajectory prediction is an important task in autonomous driving. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. In general, this paper is well-written and easy to follow. The figures are clear and illustrative to understand. 2. The idea of using IRL to solve the trajectory prediction task is interesting and promising; it has the potential to avoid covariate shift problems in behavior cloning. Weaknesses: 1. Some recent trajectory prediction methods are missing. Although this paper focuses on graph-based representation and IRL, it addresses a well-studied area with rapid development and new algorithms. 
I think one reason is that the most recent work focuses on the Waymo dataset, which is larger than Argoverse and nuScenes. I think the authors should also include results on the Waymo dataset and compare against the SOTA methods on the leaderboard. In particular, I think autoregressive models with tokenized representations have recently become popular and promising and should be considered as baselines. 2. Not enough qualitative examples are shown in the paper. Figures 4, 5, and 7 only show the multi-modal prediction of a single vehicle scenario. I think the biggest problem in trajectory prediction is learning the interaction between diverse agents. The authors may need to show more examples of interactive scenarios with multiple agents, together with a comparison with baseline models. 3. Loss function (12) might be too complex to tune for different datasets. Other Comments Or Suggestions: No. Questions For Authors: 1. The covariate shift example in Figure 1 is confusing. If the barricade is included in the observation and there are training samples that contain similar situations, I think the predictor will only predict a left turn. If there is no barricade seen in the training samples, then the model fails because of the out-of-distribution problem rather than covariate shift. 2. "We employ the target-centric coordinate system, where all context instances and sequences are normalized to the current state of the target agent through translation and rotation operations." The efficiency of using a "target-centric" representation could be low when we need to predict the trajectories of all agents. Can the proposed method be integrated into a query-centric representation? 3. On page 7 line 375, the authors say "Furthermore, we adopt an ensemble strategy (Zhang et al., 2024) to boost prediction performance". How does this ensemble work? Do the methods in Table 3 also use an ensemble? Code Of Conduct: Affirmed. Overall Recommendation: 2
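The target-centric normalization quoted in question 2 (translating and rotating all context into the target agent's current frame) can be sketched as follows. This is a generic geometric illustration assuming 2-D positions and a scalar heading angle, not the paper's actual implementation:

```python
import numpy as np

def to_target_frame(points, target_pos, target_yaw):
    """Translate and rotate world-frame points (shape [..., 2]) into the
    coordinate frame of the target agent's current state, so the agent
    sits at the origin facing the +x axis."""
    c, s = np.cos(-target_yaw), np.sin(-target_yaw)
    R = np.array([[c, -s], [s, c]])  # rotation by -yaw
    return (points - target_pos) @ R.T
```

Applying this to every lane segment and agent history yields the normalized inputs; a query-centric scheme would instead keep per-element local frames and encode relative poses between them.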
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s recognition of our research contributions and writing clarity, as well as the constructive feedback and valuable suggestions for improvement. Below, we address the reviewer’s concerns in detail. 1. **Recent Trajectory Prediction Methods.** Thank you for your thoughtful suggestions. We would like to clarify our choice of datasets. Since our work focuses on empowering IRL with vectorized representations, we selected Argoverse and nuScenes for evaluation based on benchmarking considerations: the graph-based model (DSP [1]) was originally evaluated on Argoverse, while P2T [2], to our knowledge, the only IRL-based trajectory predictor on the public leaderboard, was benchmarked on nuScenes. To ensure fair and meaningful comparisons, we adopted these two datasets as our benchmarks. Nonetheless, we acknowledge the importance of the Waymo dataset and will consider extending our experiments to Waymo in future work. Regarding autoregressive models for trajectory prediction, we include Autobot [3], an autoregressive approach on Argoverse, for comparison. As shown in the table below, our method remains competitive, demonstrating the effectiveness of IRL-based approaches. |Method|brier-minFDE$_{6}$|Brier score|minFDE$_{6}$|minADE$_{6}$|MR$_{6}$| |:-|:-:|:-:|:-:|:-:|:-:| |AutoBot|2.057|0.685|1.372| 0.876|0.164| |GoIRL (Ours)|1.796|0.623|1.173|0.809|0.120| 2. **Qualitative Examples.** We apologize for any confusion regarding our qualitative results. The scenarios presented in Figures 4, 5, and 7 include multiple traffic participants; however, for clarity, we omitted other agents to better highlight the target vehicle’s predictions. A more comprehensive visualization, including all agents, is available in the supplementary video. We will revise the manuscript to explicitly state this and include additional examples in the appendix to better illustrate interactive scenarios with multiple agents. 3. 
**Loss Function.** Thank you for your insightful comment. Our loss function follows a standard formulation for two-stage trajectory prediction models, comprising regression losses for both proposal and refined trajectories, along with a classification loss for probability estimation. In practice, we use fixed values of $\alpha=$ 1, $\beta=$ 1, and $\gamma =$ 3 in Eq. (12), which do not require extensive tuning across datasets. We will clarify this in the revised manuscript. 4. **Covariate Shift Example in Figure 1.** We appreciate the reviewer’s insightful comment and apologize for any confusion. The example in Figure 1 is intended to illustrate a covariate shift scenario in which the drivable area attribute changes. Herein, the barricade represents a constraint indicating that the region behind it is undrivable. In the training dataset, the model learns from observed drivable area annotations and the corresponding labels of ground-truth future trajectories. However, during testing, the drivable attribute is altered in the input while the ground-truth trajectory remains unchanged. This setup aligns with the definition of covariate shift [4], where the input distribution changes while the relationship between inputs and labels remains the same. We will clarify this more explicitly in the revision. 5. **Query-Centric Representation.** We greatly appreciate the reviewer’s insightful suggestion. Our IRL framework can indeed be integrated into a query-centric representation, and our ongoing work has demonstrated its feasibility in such settings. However, in this paper, our primary focus is on exploring the benefits of a graph-based representation. We will discuss the potential for extending our approach to query-centric representations in the future work section. 6. **Ensemble Strategy.** Thank you for your thoughtful inquiry. 
Our ensemble strategy follows a standard approach, where multiple models trained with different random seeds are combined using a weighted k-means algorithm to aggregate the forecasted trajectories. The results presented in Table 3 also incorporate this ensemble strategy, and we will explicitly clarify this point in the revised manuscript. **References** [1] Trajectory Prediction with Graph-based Dual-Scale Context Fusion. [2] Trajectory Forecasts in Unknown Environments Conditioned on Grid-based Plans. [3] Latent Variable Sequential Set Transformers for Joint Multi-Agent Motion Prediction. [4] https://en.wikipedia.org/wiki/Domain_adaptation.
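The ensemble step described in point 6 (aggregating forecasts from several seed models with a weighted k-means) could be sketched roughly as below. This is our own toy reconstruction with a simple deterministic initialization and confidence-weighted centroid updates; it is not the authors' actual procedure:

```python
import numpy as np

def ensemble_kmeans(trajs, weights, k, iters=20):
    """Aggregate candidate trajectories (shape [M, T, 2]) from several
    models into k representative modes via confidence-weighted k-means.
    Returns (modes [k, T, 2], normalized mode probabilities [k])."""
    M, T, D = trajs.shape
    X = trajs.reshape(M, T * D).astype(float)
    # simple deterministic initialization: evenly spaced candidates
    centers = X[np.linspace(0, M - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign every trajectory to its nearest cluster center
        dists = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = labels == j
            if members.any():
                w = weights[members] / weights[members].sum()
                centers[j] = (w[:, None] * X[members]).sum(axis=0)
    probs = np.array([weights[labels == j].sum() for j in range(k)])
    return centers.reshape(k, T, D), probs / probs.sum()
```

The summed member confidences give a natural per-mode probability, matching the paper's reported use of mode scores after aggregation.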
Summary: GoIRL is a graph-based inverse reinforcement learning framework for predicting multiple possible future trajectories in autonomous driving. It integrates lane-graph features into IRL, uses a hierarchical decoder for accurate predictions, and outperforms supervised models on Argoverse and nuScenes benchmarks. ## Update after rebuttal In the rebuttal, the authors included the model’s performance on the AV2 leaderboard, enhancing the comprehensiveness of the evaluation. They also presented quantitative results on Drivable Area Changes, which help illustrate the method’s potential for generalization in such scenarios. However, the performance of GoIRL still lags significantly behind the state-of-the-art method LOF (B FDE 1.63 vs. 1.76). Furthermore, the specific advantage offered by IRL in generalizing to drivable area changes remains unclear. The necessity of addressing this problem through IRL is questionable—alternative approaches, such as prediction models conditioned on the road graph, can also adhere strictly to the drivable area. Overall, given that the absolute performance is not improved and that the claim of generalizability to drivable area changes seems somewhat ill-posed, I assign a weak reject as my final rating. I recommend conducting additional experiments and comparisons focused on more challenging long-tail generalization scenarios (e.g., drivable area changes, sudden pedestrian emergence) to more convincingly demonstrate the benefits of using IRL. A comparison with SOTA models on such challenging setting would significantly strengthen the paper. Claims And Evidence: 1. GoIRL is the first to integrate the MaxEnt IRL paradigm with vectorized context representation. This claim appears to be true and highlights the novelty of the proposed approach. 2. GoIRL proposes a hierarchical parameterized trajectory generator and an MCMC-augmented probability fusion method for improved performance. 
This claim is well-supported by the ablation study, which clearly demonstrates the performance benefits of each component. 3. It achieves state-of-the-art performance on two large-scale motion forecasting benchmarks. This claim is well-supported by the quantitative results presented in the paper. 4. It demonstrates superior generalization abilities compared to existing supervised models in handling drivable area changes. This claim lacks sufficient evidence. The paper provides only a single qualitative example, which is not enough to conclusively support the generalization claim. Methods And Evaluation Criteria: The proposed method leverages Inverse Reinforcement Learning (IRL) to address compounding error and generalizability issues, which is a reasonable and well-motivated approach. Prior work has demonstrated the effectiveness of IRL in similar contexts. However, the benchmark datasets used in the paper are outdated. Argoverse (2019) and nuScenes (2020) are no longer actively maintained or considered state-of-the-art. More recent and widely adopted benchmarks, such as Argoverse 2 and the Waymo Open Motion Dataset, are not used. As a result, the validity of the claimed improvements is questionable, especially since strong baselines on these newer benchmarks (e.g., MTR, QCNet, LOF) are not included in the comparisons. Additionally, the evaluation metrics are not comprehensive. The paper claims superior generalization ability, yet provides no quantitative experiments to support this assertion. Theoretical Claims: The theoretical claims look good to me in this paper. Experimental Designs Or Analyses: Please see the 'Methods And Evaluation Criteria' part. Supplementary Material: The supplementary material discusses more implementation details of this work. Relation To Broader Scientific Literature: This paper leverages previous findings in Inverse Reinforcement Learning. 
Essential References Not Discussed: none Other Strengths And Weaknesses: Please see above Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s recognition of our work’s motivations, technical contributions, and experimental results, as well as the constructive suggestions for improvement. Below, we address the reviewer’s concerns in detail. ### **1. Choice of Benchmark Datasets & Recent Baselines.** We acknowledge the reviewer’s concerns regarding dataset and baseline selection and provide our response as follows: (1) **Fair Baseline Comparison.** Our choice of Argoverse and nuScenes is primarily motivated by ensuring fair and meaningful comparisons within established benchmarks, as outlined below: - Our supervised graph-based baseline (DSP [1]) was originally evaluated on Argoverse, ensuring a fair and rigorous comparison with our proposed IRL framework. - To the best of our knowledge, nuScenes is the only large-scale benchmark featuring an IRL-based predictor (P2T [2]) on its public leaderboard. Since P2T employs rasterized images as input, evaluating our graph-based IRL model on nuScenes provides a valuable contrast. - Despite being introduced in 2019 and 2020, Argoverse and nuScenes remain widely used, with strong baselines such as LOF [3] and QCNet [4] continuing to benchmark on them. The frequent public submissions further demonstrate their ongoing relevance. (2) **Comparison to Recent Baselines.** Our model is the first IRL-based predictor with a vectorized representation, which primarily focuses on the comparison against other graph-based approaches and the IRL-based predictor with rasterized representations. As shown in the table below, our GoIRL model also achieves comparable brier-minFDE$_6$ and superior Brier score relative to strong supervised models (e.g., LOF & QCNet), demonstrating reliable predictions and competitive performance. As suggested, we will incorporate these recent baselines in our revised manuscript for a more comprehensive comparison. 
|Method|brier-minFDE$_{6}$|Brier score| |:-|:-:|:-:| |LOF|1.658|0.626| |QCNet|1.693|0.626| |GoIRL (Ours)|1.695|0.569| (3) **Contributions Beyond Benchmarks.** Our work establishes a well-performing IRL-based baseline for trajectory prediction. While we acknowledge the importance of newer datasets, we believe our current evaluation is sufficient and reasonable to validate our contributions. Nevertheless, we plan to explore our method’s potential on additional benchmarks in future work. ### **2. Evaluation for Generalization in Drivable Area Changes.** We appreciate the reviewer’s concern regarding evaluation metrics and have conducted additional quantitative experiments to assess our model’s ability to generalize to drivable area changes. To the best of our knowledge, DSP is the only supervised model that incorporates drivable information. Therefore, we compare our method against DSP by evaluating the success rate across 150 diverse traffic scenarios, each with modified drivable attributes (as described in Section 4.3 of the paper). For each scenario, we randomly alter the drivable attributes near the future ground-truth positions 10 times, yielding 1,500 evaluation cases. The results, summarized in the table below, demonstrate that IRL-based predictors inherently learn interaction dynamics, leading to superior generalization in handling drivable area changes and mitigating domain bias. Moreover, we also provide additional qualitative examples to further illustrate the significance of addressing covariate shift, which can be accessed at https://anonymous.4open.science/api/repo/Anonymous-F687/file/case.jpg. These case studies demonstrate how our GoIRL model effectively handles covariate shifts in terms of drivable area changes in different scenarios. |Method|Success Rate| |:-|:-:| |DSP|15.87%| |GoIRL (Ours)|88.13%| **References** [1] Trajectory Prediction with Graph-based Dual-Scale Context Fusion. 
[2] Trajectory Forecasts in Unknown Environments Conditioned on Grid-based Plans. [3] FutureNet-LOF: Joint Trajectory Prediction and Lane Occupancy Field Prediction with Future Context Encoding. [4] Query-Centric Trajectory Prediction. --- Rebuttal Comment 1.1: Comment: The authors have updated their comparison with recent baselines, including LOF and QCNet. While GoIRL demonstrates a strong Brier score, it slightly underperforms these recent baselines in terms of brier-minFDE. Additionally, the paper still lacks evaluation on the AV2 leaderboard, and the rationale for focusing only on predictors within the same category (i.e., graph-based or IRL-based) remains unclear—broader comparisons would help contextualize the model’s overall standing in the field. On the other hand, I appreciate the new quantitative results evaluating generalization under drivable area changes. The significant performance gap between GoIRL and DSP in this setting highlights the strength of the proposed approach in capturing interaction dynamics. Given these improvements, I will raise my score to 2. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time and thoughtful feedback. Thank you again for your encouraging comments regarding our quantitative results on generalization under drivable area changes. As suggested, we have conducted additional evaluations of GoIRL on the Argoverse 2 leaderboard to facilitate broader comparisons. The results, presented in the table below, indicate that our model consistently achieves competitive performance, particularly in terms of the Brier score, which highlights GoIRL’s strength in producing accurate and reliable trajectory forecasts. 
|Method|brier-minFDE$_{6}$|Brier score| |:-|:-:|:-:| |LOF|1.63|0.56| |QCNet|1.78|0.59| |GoIRL (Ours)|1.76|0.55| While GoIRL may slightly underperform recent strong baselines such as LOF on certain metrics like brier-minFDE$_{6}$, we believe further gains could be realized through targeted hyperparameter fine-tuning. Moreover, the primary goal of this work is to explore IRL as a principled and interpretable alternative learning paradigm for trajectory prediction tasks. In this light, GoIRL’s strong generalization ability in handling drivable area changes underscores the potential of IRL-based approaches for robust motion prediction in real-world scenarios. Thank you once again for your valuable suggestions. We remain committed to improving performance on standard benchmarks and will continue advancing the capabilities of IRL-based trajectory predictors in future work.
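For readers unfamiliar with the metrics in the tables above: brier-minFDE, as defined for the Argoverse leaderboards, augments the best mode's final displacement error with a Brier penalty on the probability assigned to that mode. A minimal sketch of the computation, with illustrative array shapes of our own choosing:

```python
import numpy as np

def brier_min_fde(preds, probs, gt):
    """brier-minFDE over K forecasts: endpoint error of the best mode
    plus a Brier penalty (1 - p)^2 on that mode's probability.
    preds: [K, T, 2], probs: [K] (summing to 1), gt: [T, 2]."""
    fde = np.linalg.norm(preds[:, -1] - gt[-1], axis=-1)  # per-mode FDE
    best = fde.argmin()
    return float(fde[best] + (1.0 - probs[best]) ** 2)
```

This is why a low Brier score (i.e., high confidence on the best mode) can offset a slightly larger minFDE, as in the GoIRL rows above.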
Summary: The Graph-oriented Inverse Reinforcement Learning (GoIRL) framework is an IRL-based predictor that utilizes vectorized context representations. The authors state that the proposed method overcomes the drawbacks of supervised learning techniques. Additionally, a hierarchical parameterized trajectory generator has been incorporated to enhance prediction accuracy. The experiments were conducted on two real-world datasets: Argoverse and nuScenes. Claims And Evidence: Some claims made in the submission are problematic; please refer to the weaknesses section. Methods And Evaluation Criteria: The method evaluation makes sense for the problem of trajectory prediction. Theoretical Claims: Nil Experimental Designs Or Analyses: The experimental designs appear sound. Supplementary Material: No Relation To Broader Scientific Literature: The paper contributes to a real-world application and plays an important role in autonomous driving, intelligent systems, navigation robots, and video surveillance. Essential References Not Discussed: Nil Other Strengths And Weaknesses: Strengths 1. The paper is well-written and easy to follow, and the experimentation is thorough. 2. The results on two real-world datasets are promising, outperforming the compared methods. Weakness The following are the weaknesses that need to be addressed to improve the manuscript: Motivation and Claims: 1. In Figure 1, the authors claim that "During the data collection process, the ground-truth trajectory is labeled as going straight." However, no evidence is provided to support this claim. The authors should offer clarification or cite a source to validate this statement. 2. Furthermore, the authors assert that existing supervised models can hardly react to such changes, which is not entirely accurate [1]. There are existing methods that incorporate collision avoidance behavior. 
For example, if there is a vehicle in front of the ego-vehicle, many models are designed to predict that the ego-vehicle will follow collision avoidance protocols and choose a safer maneuver, especially if such scenarios are present in the training data. [1] Meng, Dejian, et al. "Vehicle trajectory prediction based predictive collision risk assessment for autonomous driving in highway scenarios." arXiv preprint arXiv:2304.05610 (2023). 3. In line 071, the authors state, "Another critical concern associated with the supervised approach is the modality collapse issue." This arises because the predictor must generate diverse and plausible predictions, yet many supervised methods tend to produce a single trajectory (deterministic prediction) [2]. Additionally, in the experiments, the authors mention K values of 6 and 10; however, K = 1 is not included. [2] Bae, Inhwan, Young-Jae Park, and Hae-Gon Jeon. "Singulartrajectory: Universal trajectory predictor using diffusion model." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. 4. Is "recurrently forecasts control points to represent the trajectory" inspired by Graph-TERN [3]? [3] Bae, Inhwan, and Hae-Gon Jeon. "A set of control points conditioned pedestrian trajectory prediction." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 5. 2023. Novelty The main novelty of the paper lies in the use of MaxEnt IRL (Maximum Entropy Inverse Reinforcement Learning) for the multimodal trajectory prediction task. Existing literature [4,5] already addresses MaxEnt IRL in the context of trajectory prediction. The authors should clearly distinguish their method from these as well. [3] Deo, Nachiket, and Mohan M. Trivedi. "Trajectory forecasts in unknown environments conditioned on grid-based plans." arXiv preprint arXiv:2001.00735 (2020). [4] T. Hirakawa, T. Yamashita, K. Yoda, T. Tamaki, and H. Fujiyoshi, "Travel Time-dependent Maximum Entropy Inverse Reinforcement Learning for Seabird Trajectory Prediction," in Asian Conference on Computer Vision, pp. 430-435, 2017. [5] T. Hirakawa, T. Yamashita, T. Tamaki, H. Fujiyoshi, Y. Umezu, I. Takeuchi, S. Matsumoto, K. Yoda, "Can AI predict animal movements? Filling gaps in animal trajectories using Inverse Reinforcement Learning," Ecosphere, vol. 9, no. 10, pp. e02447, 2018. Experimentation 1. In Table 4, the addition of the Bézier curve results in only a minor improvement. The authors should explain why the Bézier curve contributes so little to the overall performance. 2. Additionally, what is the baseline implementation, and which result in Table 4 corresponds to the baseline? This should be clearly stated to help readers understand the comparative performance of the proposed method. Other Comments Or Suggestions: Nil Questions For Authors: Please see Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s affirmative comments on the experimentation, results, and writing, as well as constructive suggestions and valuable references, which have helped us strengthen our manuscript. Below, we provide detailed responses to each of the concerns.

1. **Clarification of the Statement in Figure 1.** Thank you for your insightful comments. In fact, the motivating example in Figure 1 is abstracted from a real-world scenario in the Argoverse dataset (please see Figure 5 in the paper), where the original ground-truth trajectory follows a straight path. This example illustrates the necessary adaptations a predictor should make when encountering drivable area changes in the driving environment. If the ground-truth trajectory involved a left turn, we would modify the undrivable region at the left-turn entrance accordingly to reflect such domain bias issues. We will clarify this more explicitly in the revised manuscript.

2. **Handling Drivable Area Changes.** We appreciate the reviewer’s concern and would like to further clarify our motivation. Our work focuses on handling drivable area attributes, which differ from object-level obstacles (with tracking IDs or bounding boxes). For instance, certain undetectable obstacles, such as regions behind traffic barricades, may appear empty but remain undrivable due to implicit road rules or physical constraints. In practice, drivable area information can be obtained through techniques like occupancy prediction; in our work, we derive it from HD map annotations in the dataset. Our IRL-based method is designed to handle scenarios where drivable attributes differ from those observed during training, and we will clarify this distinction more explicitly in the revised manuscript.

3. **Deterministic Prediction.** Many thanks for your constructive comment. Our work primarily focuses on multimodal trajectory prediction due to the inherent uncertainty in future intentions. However, we recognize that deterministic prediction remains important. We appreciate the suggested reference (SingularTrajectory [1]) and will include it in our discussion, along with additional experimental results for $k=1$ in our revised version, as presented in the table below:

|Dataset|minFDE$_{1}$|minADE$_{1}$|MR$_{1}$|
|:-|:-:|:-:|:-:|
|Argoverse|3.365|1.551|0.539|
|nuScenes|6.530|3.185|0.822|

4. **Control-Point Based Prediction.** We thank the reviewer for sharing the excellent work, Graph-TERN [2], which presents a similar approach to trajectory representation. We will include it in our references and acknowledge its relevance in the revised manuscript.

5. **Distinction from Existing IRL-Based Predictors.** We greatly appreciate the reviewer’s valuable suggestion. The key novelty of our approach, compared to prior IRL-based predictors, lies in the representation and processing of driving context. Unlike previous IRL-based methods [3, 4, 5] that rely on rasterized imagery inputs, our method employs a vectorized (graph-based) representation, enabling it to capture geometric and semantic information in traffic scenes more effectively. Moreover, prior works such as [4, 5] primarily focus on animal trajectory prediction, where motion dynamics differ significantly from those in human-driven traffic scenarios.

6. **Contribution of Bézier Curve Parameterization.** We greatly appreciate the reviewer’s detailed feedback. The primary motivation for adopting Bézier curve-based trajectory parameterization is to ensure kinematic feasibility, rather than solely improving trajectory prediction accuracy. While the performance improvement in standard metrics may be marginal, Bézier curves provide smooth, physically plausible trajectories that align with real-world motion constraints. We will clarify this reasoning in the revised manuscript.

7. **Clarification of Table 4.** Many thanks for your valuable suggestion. Table 4 presents an ablation study quantifying the impact of key components in our trajectory decoder. Each row removes a specific component to isolate its contribution to the overall performance, and the last row represents the complete model, incorporating all components. We will make a clear statement in the revised version.

**References**

[1] SingularTrajectory: Universal Trajectory Predictor Using Diffusion Model.
[2] A Set of Control Points Conditioned Pedestrian Trajectory Prediction.
[3] Trajectory Forecasts in Unknown Environments Conditioned on Grid-based Plans.
[4] Travel Time-dependent Maximum Entropy Inverse Reinforcement Learning for Seabird Trajectory Prediction.
[5] Can AI Predict Animal Movements? Filling Gaps in Animal Trajectories using Inverse Reinforcement Learning.
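The Bézier parameterization defended in item 6 of this rebuttal can be illustrated with a minimal sketch (my own illustrative code, not the authors' GoIRL implementation): a cubic Bézier trajectory is a Bernstein-basis combination of four 2-D control points, so it is a polynomial in the parameter and therefore smooth by construction.

```python
import numpy as np

def bezier_trajectory(control_points, num_samples=50):
    """Evaluate a cubic Bezier curve at evenly spaced parameter values.

    control_points: (4, 2) array of 2-D control points.
    Returns an array of shape (num_samples, 2) of trajectory waypoints.
    """
    p = np.asarray(control_points, dtype=float)
    t = np.linspace(0.0, 1.0, num_samples)[:, None]   # (num_samples, 1)
    # Bernstein basis polynomials of degree 3; they sum to 1 at every t.
    b = np.hstack([(1 - t) ** 3,
                   3 * t * (1 - t) ** 2,
                   3 * t ** 2 * (1 - t),
                   t ** 3])                            # (num_samples, 4)
    return b @ p

# The curve starts at the first control point and ends at the last one,
# which makes conditioning on a goal (endpoint) location convenient.
pts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.5], [3.0, 0.0]])
traj = bezier_trajectory(pts)
```

Because the curve is infinitely differentiable in the parameter, velocity and curvature vary smoothly along the predicted trajectory, which is the "kinematic feasibility" property the rebuttal refers to.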
Summary: This paper presents Graph-oriented Inverse Reinforcement Learning (GoIRL), a novel IRL-based prediction framework that leverages vectorized context representations. The framework first extracts features from the vectorized inputs and then transforms them into grid space using a feature adaptor to ensure compatibility with existing IRL methods. To improve efficiency and accuracy, GoIRL employs a hierarchical, parameterized trajectory generator augmented with a refinement module that enhances both prediction accuracy and confidence. Extensive experiments demonstrate the effectiveness of the proposed design.

Claims And Evidence: The claims are substantiated by thorough methodological descriptions and comprehensive experimental results.

Methods And Evaluation Criteria: Most recent motion forecasting methods adopt behavior cloning paradigms, which struggle with generalization and domain adaptation. These limitations become especially critical when the environment deviates significantly from the training distribution. In contrast, reinforcement learning (RL) offers a principled solution through its reward-driven learning and interaction-based training, rather than merely fitting to data distributions. Moreover, RL is also well-suited to address the modality collapse issues commonly observed in supervised learning approaches.

Theoretical Claims: I have reviewed the IRL algorithms presented in the submission and did not identify any issues.

Experimental Designs Or Analyses: This submission adopts standard evaluation metrics (ADE, FDE, MR) and widely used datasets (Argoverse 1 and nuScenes) for experimental validation, which is fair and reasonable. However, given the availability of more challenging and diverse datasets such as Argoverse 2 and Waymo, the authors are encouraged to evaluate the proposed method on these benchmarks further to demonstrate its generalization capabilities better.

Supplementary Material: I have examined both the appendix and the supplementary videos provided.

Relation To Broader Scientific Literature: This paper contributes to the broader scientific literature on motion forecasting by advancing the integration of inverse reinforcement learning (IRL) with vectorized map representations. Traditional approaches in motion forecasting predominantly follow behavior cloning (BC) paradigms, which are known to suffer from issues such as covariate shift and poor generalization to out-of-distribution scenarios. This work addresses these limitations by leveraging the reward-driven and interaction-based nature of IRL, positioning it as a more robust alternative to BC methods.

Essential References Not Discussed: To the best of the reviewer's knowledge, there is no existing work that closely resembles this research.

Other Strengths And Weaknesses: This submission is well-motivated, particularly in addressing the issue of covariate shift. It also provides a well-structured discussion of related work, clearly distinguishing between behavior cloning (BC) and inverse reinforcement learning (IRL) approaches. The proposed method is novel in several aspects: it effectively leverages rich information from vectorized representations and introduces a feature adaptor that enables compatibility with IRL methods. Furthermore, the hierarchical parameterized trajectory generator is designed to mitigate the inefficiencies commonly found in grid-based approaches.

Other Comments Or Suggestions: N.A.

Questions For Authors: The main concern is the omission of several commonly used baselines, such as:

1. HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction
2. GANet: Goal Area Network for Motion Forecasting

Although some of these methods may not be directly comparable—for instance, due to the use of more powerful scene encoders—they should still be included for completeness and to provide a broader context for comparison. Moreover, given that the proposed method operates in the grid representation space, it would be beneficial to include comparisons with other supervised grid-based approaches to better highlight its advantages:

1. THOMAS: Trajectory Heatmap Output with Learned Multi-Agent Sampling
2. GOHOME: Graph-Oriented Heatmap Output for Future Motion Estimation

Lastly, the paper would benefit from including more examples or case studies illustrating covariate shift, to further emphasize the importance and relevance of addressing this issue.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s thorough and professional evaluation of our work. We are grateful for the recognition of our work’s motivations, novelty, contributions, and experimental validation, as well as for the constructive feedback and valuable suggestions for improvement. Below, we provide detailed responses to each of the reviewer’s comments.

1. **Choice of Datasets.** Thank you for your thoughtful suggestion regarding additional datasets. Our current evaluation focuses on Argoverse and nuScenes, as these datasets have been widely used to benchmark graph-based trajectory prediction methods. Specifically, we selected Argoverse to benchmark against graph-based models and nuScenes to compare against P2T, which, to our knowledge, is the only IRL-based trajectory predictor on the public leaderboard. Ensuring fair and meaningful comparisons was our primary motivation for this choice. Nevertheless, we acknowledge the importance of evaluating on more diverse datasets to better demonstrate the effectiveness of our approach, and we will explore extending our experiments to diverse datasets in future work.

2. **Additional Baselines.** We appreciate the reviewer’s valuable recommendation to include additional baselines for completeness. In the revised manuscript, we will incorporate comparisons with HiVT [1] and GANet [2], as well as grid-based motion prediction methods such as THOMAS [3], HOME [4], and GOHOME [5]. The table below presents the performance comparison:

|Method|brier-minFDE$_{6}$|Brier score|minFDE$_{6}$|minADE$_{6}$|MR$_{6}$|
|:-|:-:|:-:|:-:|:-:|:-:|
|HiVT|1.842|0.673|1.169|0.774|0.127|
|GANet|1.790|0.629|1.161|0.806|0.118|
|THOMAS|1.974|0.535|1.439|0.942|0.104|
|GOHOME|1.983|0.533|1.450|0.943|0.105|
|HOME+GOHOME|1.860|0.568|1.292|0.890|0.085|
|GoIRL (Ours)|1.796|0.623|1.173|0.809|0.120|
|GoIRL-Ens (Ours)|1.695|0.569|1.126|0.783|0.110|

3. **Additional Case Studies on Covariate Shift.** Thank you for your insightful suggestion. To further illustrate the significance of addressing covariate shift, we provide additional qualitative examples, which can be accessed at https://anonymous.4open.science/api/repo/Anonymous-F687/file/case.jpg. These case studies demonstrate how our GoIRL model effectively handles covariate shifts in terms of drivable area changes in different scenarios.

**References**

[1] HiVT: Hierarchical Vector Transformer for Multi-Agent Motion Prediction.
[2] GANet: Goal Area Network for Motion Forecasting.
[3] THOMAS: Trajectory Heatmap Output with Learned Multi-Agent Sampling.
[4] HOME: Heatmap Output for Future Motion Estimation.
[5] GOHOME: Graph-Oriented Heatmap Output for Future Motion Estimation.
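For context on the leaderboard metrics in the table above, here is a minimal sketch of how the K-mode metrics are typically computed for a single scene, following the standard Argoverse definitions (the 2 m miss threshold is the usual convention; this is my own illustrative code, not taken from the rebuttal):

```python
import numpy as np

def forecasting_metrics(preds, probs, gt, miss_threshold=2.0):
    """Single-scene multimodal forecasting metrics.

    preds: (K, T, 2) candidate trajectories, probs: (K,) mode confidences,
    gt: (T, 2) ground-truth trajectory. Returns a dict of metrics.
    """
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) pointwise errors
    ade = dists.mean(axis=1)                           # average displacement per mode
    fde = dists[:, -1]                                 # final displacement per mode
    best = fde.argmin()                                # mode with the closest endpoint
    return {
        "minADE": ade[best],
        "minFDE": fde[best],
        "MR": float(fde[best] > miss_threshold),       # averaged over scenes in practice
        "brier-minFDE": fde[best] + (1.0 - probs[best]) ** 2,
    }

# Toy scene: one bad mode, one perfect mode with confidence 0.6.
gt = np.stack([np.arange(5, dtype=float), np.zeros(5)], axis=1)
preds = np.stack([gt + np.array([0.0, 3.0]), gt])
print(forecasting_metrics(preds, np.array([0.4, 0.6]), gt))
```

The Brier penalty $(1-p)^2$ is what separates brier-minFDE from minFDE in the table: a model can lower it either by predicting a closer endpoint or by assigning higher confidence to its best mode.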
Phase transitions for the existence of unregularized M-estimators in single index models
Accept (poster)
Summary: This work considers the problem of the existence of M-estimators in the proportional high-dimensional regime, where the number of samples $n$ and the covariate dimension $p$ diverge at a fixed rate $n/p\to\delta$. The main result is to establish a sharp frontier $\delta_{\infty}$ separating regimes where the probability of existence of a minimizer is asymptotically zero or one. This is achieved by exploiting a duality between the existence problem and an optimization problem in a Hilbert space, for which standard techniques apply.

Claims And Evidence: All the claims are followed by rigorous mathematical proofs. Numerical simulations are also provided as an illustration of the result in particular cases.

Methods And Evaluation Criteria: N/A.

Theoretical Claims: I did not carefully check the proofs, but I did skim through them and did not find any red flag. The proof scheme builds on well-known previous work in the literature.

Experimental Designs Or Analyses: N/A.

Supplementary Material: I skimmed through all the supplementary material. It is well-written and enough detail is provided.

Relation To Broader Scientific Literature: Although there is not a classical "Related Works" section, the authors do a good job in situating their results within the literature. Their work is motivated by (Sur & Candès 2019; 2020), and builds on previous contributions from (Montanari et al. 2023; Bellec & Koriyama 2023; Thrampoulidis et al. 2018), which are extensively acknowledged.

Essential References Not Discussed: The only missing works I would add are: [1], which was the first work to show that $n=2d$ Gaussian points can be linearly separated at large $d$, and [2], who generalized this result to linear separators with a given fixed margin. They are not essential, but are foundational works concerning the question addressed here.

- [1] Thomas M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition.
IEEE Transactions on Electronic Computers, (3):326–334, 1965.
- [2] E. Gardner and B. Derrida. Optimal storage properties of neural network models. J. Phys. A: Math. Gen., 21:271, 1988.

Other Strengths And Weaknesses: Overall, I think this is a solid and well-written contribution. It clearly states the results and how it fits the literature, and delivers what is promised, without overclaiming. Of course, this work has all sorts of the typical limitations related to exact asymptotic works (particular data distribution, convexity of the risk, etc.). But in my opinion these are all minor.

Other Comments Or Suggestions: I would suggest the authors add a small discussion on the statistical implications of their findings. For instance, from the formula for $\delta_{\infty}$, can you say anything on the class of link functions which lead to higher or smaller thresholds? What phenomenology can we draw out from the theoretical result?

Questions For Authors: Perhaps this is a naive question, but where does the limitation $\delta\in(1,\infty)$ come from in the proof? Is it clear that $\delta_{\infty}>1$ for any convex loss function?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the additional references. We agree that the phase transition phenomena studied in our paper are connected to these earlier works in statistical physics. We addressed this point in our rebuttal to reviewer LDvE; in a way our results are complementary and fill a gap. We will add the references given in the review to the camera-ready version.

**Regarding the assumption $\delta>1$.** The assumption $\delta>1$ appears in several places. First, we used the assumption $\delta>1$ for the infinite-dimensional optimization problem in equation (5) to be well-defined, where the constraint is given by $\|v\| - \mathbb{E}[vG]/\sqrt{1-\delta^{-1}}\le 0$. Second, $\delta=1$ allows for instance $p=n$, whence $X\in \mathbb{R}^{n\times p}$ is almost surely invertible. In this case, for a given $y_i$, one can always find a vector $b\in \mathbb{R}^p$ such that $x_i^Tb$ has a given sign for each $x_i$, so the problem is trivially separable. Finally, $\delta\le 1$ allows for instance $p\ge n+1$, in which case the M-estimator (if it exists) is not unique (i.e., several solutions to the minimization problem in $\mathbb{R}^p$ exist). Further structural assumptions are then needed to have a well-defined M-estimator, for instance by considering the solution with minimal L2 norm.

**On the question regarding the lower bound $\delta_\infty \ge 1$.** It is always true for any convex loss function and response structure of $(x_i, y_i)$ as long as Assumption 2.4 is satisfied. This is because the threshold $\delta_\infty$ is defined as
$$
\delta_\infty = 1/ \Bigl(\inf_{t\in \mathbb{R}}\varphi(t)\Bigr) \quad \text{with} \quad \varphi(t) = \mathbb{E}[(G+tU)^2 I_{\Omega_\vee(Y)}] + \mathbb{E}[(G+tU)^2_+ I_{\Omega_\nearrow(Y)}] + \mathbb{E}[(G+tU)^2_- I_{\Omega_\searrow(Y)}].
$$
Here, by $x_{-}^2 \le x^2$ and $x_{+}^2\le x^2$, $\varphi(t)$ can be upper bounded as
$$
\varphi(t) \le \mathbb{E}[(G+tU)^2] = 1 + t^2\, \mathbb{E}[U^2], \quad \forall t\in \mathbb{R},
$$
so taking $\inf_{t\in \mathbb{R}}$ gives the upper estimate $\inf_{t\in \mathbb{R}} \varphi(t) \le \varphi(0) \le 1$. Combined with the definition of $\delta_\infty$, we get $\delta_\infty \ge 1$.

In terms of statistical interpretation, the threshold $\delta_\infty$ decreases when the loss becomes coercive for more realizations of $Y$. This is because $(G + tU)^2$ dominates the other terms $(G + tU)^2_+$ and $(G + tU)^2_-$ pointwise, so $\varphi(t)$ increases with $1_{\Omega_\vee(Y)}$ for each fixed $t$, thereby reducing $\delta_\infty = 1 / \inf_t \varphi(t)$. This behavior is illustrated in Figure 3. For the Binomial loss with $q$ repeated measurements, $\Omega_\vee(Y)$ is equal to the event $\{0 < Y < q\}$. Consequently, $\delta_\infty$ decreases with larger $q$, as observed in the figure. We will incorporate the above discussion into the camera-ready version.
Summary: This paper studies the phase transitions for M-estimators in single index models. Prior work has demonstrated that there exists a threshold $\delta_{\infty}$ such that, when $n/p \to \delta$, the M-estimator exists with high probability when $\delta > \delta_{\infty}$, while the M-estimator does not exist when $\delta < \delta_{\infty}$. However, the prior results only apply to binary logistic regression, while this work generalizes this result to other single-index models. Another contribution of this paper is that there is a corresponding nonlinear system which governs the asymptotic behaviour of the M-estimator, but the existence of a solution to this system for $\delta > \delta_{\infty}$ is unproven. The authors address this gap by proving the existence of such a solution when $\delta > \delta_{\infty}$.

Claims And Evidence: Yes, the two major claims are supported by clear and convincing evidence.

1. The M-estimator exists with high probability when $\delta > \delta_{\infty}$, while the M-estimator does not exist when $\delta < \delta_{\infty}$, for general single index models. The claim is justified by Theorem 2.6.
2. The existence of a solution to the critical nonlinear system when $\delta > \delta_{\infty}$. The claim is justified by Theorem 2.7.

Methods And Evaluation Criteria: This paper does not involve evaluation.

Theoretical Claims: I have not checked the correctness in detail. But the reasoning in the main text, where the authors relate the proof of Theorem 2.7 to infinite-dimensional optimization problems, seems to be reasonable.

Experimental Designs Or Analyses: The experiments in Section 4 are sound and support Theorem 2.6. Theorem 2.7 is not supported by experiments.

Supplementary Material: No supplementary material is provided.

Relation To Broader Scientific Literature: I am not an expert in this area, so I am not sure how the theory for single index models can affect the broader community.

Essential References Not Discussed: I did not find essential related works not discussed.

Other Strengths And Weaknesses: Strengths: The authors generalize the theory of phase transitions in single index models from binary logistic regression to general models, which is of good significance. The theory for the existence of solutions to the nonlinear system is also of great importance. Weaknesses: no obvious weaknesses.

Other Comments Or Suggestions: Is it possible that in the introduction, you briefly introduce how the single index model is applied in machine learning? And why the phase transition is important.

Questions For Authors: What will happen when $\delta = \delta_{\infty}$? Is there any more detailed characterization of the existence of the M-estimator under this borderline scenario?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your suggestion regarding the introduction. The single index model is a flexible yet interpretable framework for modeling nonlinear relationships while avoiding the curse of dimensionality. Single index models are useful because they make only a weak assumption regarding the modeling of $y_i\mid x_i$: they assume the existence of a ground truth $\beta^*\in \mathbb{R}^p$ (or $w$ in our paper), and that the response $y_i$ only depends on $x_i$ through the inner product with the ground truth $\beta^*$, but make no assumption on how $y_i$ depends on $x_i^T\beta^*$; in particular any nonlinearity is allowed. For machine learning in proportional asymptotics more specifically, single index models (or more generally multi-index models) allow one to characterize the limiting behavior of estimators by a low-dimensional system of equations, for example equation (2) in the submission. Techniques in the submission push the understanding of these systems of equations (sometimes referred to as the Gaussian Equivalent Model) that arise from CGMT (Thrampoulidis et al., 2015) or Approximate Message Passing analysis. Without the single-index (or multi-index) model assumption, for instance if $y_i$ depends on $x_i$ through a growing number of dimensions as $n,p\to+\infty$, we are not aware of an established Gaussian Equivalent Model.

**Regarding what happens if $n/p\to\delta$ with $\delta=\delta_\infty$.** It is known that phase transitions arising from the convex geometry arguments of Amelunxen et al. (2013) have a $1/\sqrt{n}$ width of uncertainty. More concretely, the top of page 5 of https://arxiv.org/abs/1804.09753 (Sur and Candes, 2018) explains this as follows: for any sequence $\lambda_n\to+\infty$,

- if $p/n > \delta_\infty^{-1} + \lambda_n /\sqrt n$ then $P(\text{M-estimator exists})\to 0$,
- if $p/n < \delta_\infty^{-1} - \lambda_n /\sqrt n$ then $P(\text{M-estimator exists})\to 1$.

So it is possible to be more precise and study what happens near the threshold as long as $p/n$ stays a little more than $1/\sqrt n$ away from the critical threshold. But this phenomenon has been known since Amelunxen et al. (2013) and Sur and Candes (2018), so we did not emphasize it in the submission. We will add a remark explaining this, including the above pointers, in the camera-ready version.
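The $1/\sqrt n$ transition width discussed in this rebuttal can be illustrated with Cover's exact count of separable dichotomies: $n$ points in general position in $\mathbb R^d$ with i.i.d. fair random labels are linearly separable with probability $2^{1-n}\sum_{k=0}^{d-1}\binom{n-1}{k}$, which equals exactly $1/2$ at $n=2d$ and sharpens around $n/d=2$ as $d$ grows. A short sketch (my own illustrative code, not part of the submission):

```python
from math import comb

def prob_separable(n, d):
    """Cover (1965): probability that n points in general position in R^d
    with i.i.d. fair random labels are linearly separable."""
    return sum(comb(n - 1, k) for k in range(d)) / 2 ** (n - 1)

# Exactly 1/2 at the critical ratio n = 2d, for every d.
print(prob_separable(40, 20))   # 0.5

# The transition around n/d = 2 sharpens as d grows (width ~ 1/sqrt(n)):
# just below threshold the probability approaches 1, just above it 0.
for d in (10, 100, 1000):
    print(d, prob_separable(int(1.8 * d), d), prob_separable(int(2.2 * d), d))
```

The printed pairs show the borderline cases $n/d = 1.8$ and $n/d = 2.2$ pulling toward 1 and 0 respectively as $d$ increases, which is exactly the finite-$n$ smearing of the zero-one law described above.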
Summary: This paper investigates the existence of solutions to the nonlinear system of equations that characterize the asymptotic behavior of the M-estimator. Notably, the existence of a solution for $\delta>\delta_{\infty}$ remains largely unproven when the assumption of independence between $x_i$ and $y_i$ is removed in binary logistic regression. The authors extend the existing theory in two key directions: (1) they generalize the results from binary logistic regression to a broader class of single-index models (strictly convex loss), and (2) they analyze the problem beyond the global null case, providing a necessary and sufficient condition that ensures the system admits a solution. This result effectively addresses a gap in the existing literature.

Claims And Evidence: Claims are well supported.

Methods And Evaluation Criteria: As this paper generalizes the results of Candès & Sur (2020) from binary logistic regression to a broader class of single-index models, the authors provide numerical validation of their methods by generating data from a Binomial distribution, which can be easily reduced to binary logistic regression by setting $q=1$. Their method was verified and shown to be consistent with the results of Candès & Sur (2020).

Theoretical Claims: The main proof strategy was presented in the main text, and I briefly read the proof in the supplementary. The theoretical claims look solid to me, though I didn't verify every single piece of the details in the proof.

Experimental Designs Or Analyses: The experiments are straightforward and effectively validate the soundness of the results.

Supplementary Material: I reviewed Supplementary A on the derivation of the threshold from convex geometry only.

Relation To Broader Scientific Literature: Candès and Sur (2020) primarily examine phase transitions in high-dimensional binary logistic regression models. A key contribution of this paper is its generalization of their results from binary logistic regression to a broader class of single-index models. Additionally, this work advances the theory beyond the global null, addressing a gap left by Sur and Candès (2019), which focused only on numerical solutions.

Essential References Not Discussed: n/a

Other Strengths And Weaknesses: I find this paper to be well-written and clearly articulated in its contributions. To the best of my knowledge, it is original in addressing a theoretical gap in the phase transition literature by extending the theory beyond the assumption of independence between $x_i$ and $y_i$, particularly beyond the global null setting.

Other Comments Or Suggestions: n/a

Questions For Authors: n/a

Ethical Review Concerns: n/a

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the careful reading of the paper and the kind words. We will be happy to provide clarifications if needed in later discussions.
Summary: This paper studies phase transitions for the existence of unregularized M-estimators in single-index models under proportional asymptotics, where the sample size n and feature dimension p grow proportionally with n/p → δ ∈ (1, ∞). The authors generalize results previously established for binary logistic regression by Candes & Sur (2020) to more general loss functions and single-index models. They derive an explicit critical threshold δ∞ governing the phase transition for the existence of the M-estimator and rigorously prove that the corresponding nonlinear system admits a solution if and only if δ > δ∞. The work relies on convex geometry, the Gaussian kinematic formula, and infinite-dimensional optimization techniques to establish these results.

Claims And Evidence: These are theorems, with rigorous proofs.

Methods And Evaluation Criteria: N/A

Theoretical Claims: These are theorems, with rigorous proofs. The mathematical derivations are elegant and rigorous. The results clarify and extend known phase transition phenomena in M-estimation.

Experimental Designs Or Analyses: N/A

Supplementary Material: I briefly went into the proofs to understand how they work.

Relation To Broader Scientific Literature: The paper does a good job covering prior work in high-dimensional inference, particularly the literature stemming from Candes & Sur (2020) and related works in convex optimization.

Essential References Not Discussed: The authors however seem to omit some of the early foundational studies in information theory and statistical physics, which introduced similar phase transitions long before the modern convex-geometry-based approaches. In particular, the classical results of Cover (1965) on the geometry of linear inequalities and Gardner & Derrida (1988) on the storage capacity of neural networks are directly related to the question of M-estimator feasibility in high dimensions. Moreover, Krauth & Mézard (1987) already presented a phase diagram remarkably similar to the one studied here, predating modern convex-analytic approaches by decades. It would be valuable for the authors to acknowledge these contributions and place their results within this broader historical context. In more recent years, I can think of a few generalizations of Sur's results, e.g., "Mignacco, F., Krzakala, F., Lu, Y., Urbani, P., & Zdeborova, L. (2020, November). The role of regularization in classification of high-dimensional noisy gaussian mixture. In International Conference on Machine Learning (pp. 6874-6883). PMLR.", which might be covered by the authors' theorem, or the random-feature version of separability in "Gerace, F., Loureiro, B., Krzakala, F., Mézard, M., & Zdeborová, L. (2020, November). Generalisation error in learning with random features and the hidden manifold model. In International Conference on Machine Learning (pp. 3452-3462). PMLR."

Other Strengths And Weaknesses: Overall, while the results are rigorous and mathematically insightful, their practical significance for ICML could be debated. After all, the main contribution is a proof of a known and accepted result. While this is a welcome addition, it is not clear that ICML is the best venue. I nevertheless support publication.

Other Comments Or Suggestions: See above

Questions For Authors: See above

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting the importance of earlier foundational work from the information theory and statistical physics communities. We agree that the phase transition phenomena studied in our paper are connected in spirit to classical results such as Cover (1965) on the geometry of linear inequalities, which leads to the threshold $\delta_\infty=2$ when the response $y_i$ follows the Bernoulli(1/2) distribution and the features $x_i$ are independent from $y_i$. We also appreciate the pointers to earlier works in statistical physics: Gardner & Derrida (1988) and Krauth & Mézard (1987). We will revise the manuscript to include the historical overview, explicitly acknowledging these contributions.

We also thank the reviewer for drawing our attention to more recent developments, including Mignacco et al. (2020) and Gerace et al. (2020). While our focus is on the existence of unregularized M-estimators for single-index models with Gaussian design, these papers explore related questions and offer complementary perspectives. The threshold in equation (27) of Mignacco et al. (2020) is not directly recovered by our results, since we assume here $x_i\sim N(0, I_p)$, which is not satisfied in the Gaussian mixture setting of Mignacco et al. (2020). However, the techniques presented in the submission can be used to prove that the system in Mignacco et al. (2020) has a solution, and we will add a remark to that effect in the camera-ready version.

Regarding whether machine learning conferences such as ICML are suitable, we believe ICML and similar venues strive to present mathematical results with no gaps. The existence of solutions to the system of equations governing estimators in proportional asymptotics has typically been assumed in the theorems of the corresponding publications: for instance the CGMT paper (Thrampoulidis et al. 2015), Sur and Candes (2018), Salehi et al. (2019), and several others assume in their theorems that the system of equations of interest admits a unique solution. Existence of solutions to the system can be easily deduced under a strong convexity assumption (e.g., with an additional L2 penalty $\lambda\|\cdot\|_2^2$), but taking the limit $\lambda\to 0$ as in Appendix B.3 of https://arxiv.org/abs/2201.13383 to deduce results from the strongly convex case ($\lambda>0$) to the $\lambda=0$ case by continuity requires that the low-dimensional system corresponding to $\lambda=0$ has a unique solution. Thus our method to prove that the system has a unique solution complements these works, and in a sense fills the gap and completes the argument. Thanks again for the careful reading and the pointers to the missing references.
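The appeal to Cover (1965) in this rebuttal can be checked empirically: with labels that are fair coin flips independent of $x_i \sim N(0, I_p)$, the unregularized logistic M-estimator fails to exist exactly when the data are linearly separable, and separability undergoes a phase transition at $n/p = 2$. A small feasibility-LP sketch (my own illustrative code, not from the submission):

```python
import numpy as np
from scipy.optimize import linprog

def is_separable(X, y):
    """True iff some b satisfies y_i * x_i^T b >= 1 for all i
    (strict linear separability, up to rescaling of b)."""
    n, p = X.shape
    A_ub = -(y[:, None] * X)          # encodes y_i * x_i^T b >= 1 as A_ub @ b <= -1
    res = linprog(c=np.zeros(p), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * p, method="highs")
    return res.status == 0            # status 0 = feasible, 2 = infeasible

rng = np.random.default_rng(0)
p = 40
for delta in (1.5, 2.5):              # n/p below and above Cover's threshold 2
    n = int(delta * p)
    X = rng.standard_normal((n, p))
    y = rng.choice([-1.0, 1.0], size=n)
    print(delta, is_separable(X, y))
```

For moderate $p$ the two printed ratios typically land on opposite sides of the transition (separable below $n/p=2$, not separable above), with the indecisive band shrinking like $1/\sqrt n$ as discussed above.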
Do Bayesian Neural Networks Actually Behave Like Bayesian Models?
Accept (poster)
Summary: The paper **"Do Bayesian Neural Networks Actually Behave Like Bayesian Models?"** investigates whether common **approximate inference algorithms** for Bayesian Neural Networks (BNNs) adhere to the theoretical principles of Bayesian belief updating. It empirically evaluates methods such as **Variational Inference (VI), Laplace Approximation, SWAG, and Stochastic Gradient Langevin Dynamics (SGLD)** on synthetic regression and real-world classification tasks. The key findings indicate that: ### **1. Common BNN Algorithms Lack Functional Variability** Let $ \theta_s, s=1,\ldots, S $ be a collection of posterior samples from a common BNN algorithm such as Laplace, VI, HMC, etc. We can visualize the function draws $ f_{\theta_s}(x) $ in simple regression settings (e.g., scalar $ x $ and $ y $) by plotting them. Section 4.1 presents evidence that many common BNN methods fail to capture **meaningful uncertainty**, particularly in regions of $ x $ with **few observations**. This suggests that approximate BNN posteriors tend to **over-collapse**, producing function samples that lack diversity and fail to reflect the full range of plausible models. This phenomenon is particularly problematic in **extrapolation regions**, where true Bayesian inference should exhibit greater uncertainty. ### **2. Common BNN Algorithms Forget Past Information in Sequential Learning** In sequential Bayesian updating, the posterior after seeing two datasets sequentially should match the single-step posterior: $$ \pi(\theta | D_1) \quad \text{then updating with} \quad \pi(\theta | D_2) \quad \text{should yield} \quad \pi(\theta | D_1 \cup D_2) $$ Instead, common approximate inference methods exhibit **forgetting** as shown in Section 5. ### **3. Common BNN Algorithms Violate Bayesian Predictive Coherence** Again let $ \theta_s, s=1,\ldots, S $ be a collection of posterior samples from a common BNN algorithm such as Laplace, VI, HMC, etc. 
We can use this collection to form a predictive density with which we impute future data. Then on this augmented dataset (original plus imputed) we can reapply the same BNN algorithm to get a new posterior sample. Repeating this imputation $ N $ times, we can get $ N $ new posterior samples. These new samples should remain consistent with the original ones in the sense that metrics such as test accuracy, log-likelihood, and expected calibration error (ECE) should be preserved. However, Section 6 presents evidence that these metrics shift significantly after imputation, which the authors consider to be evidence for predictive incoherence. ## update after rebuttal I increased my score from 3 to 4 and maintain a positive impression of the paper. The rebuttal was handled well by the authors. I am not convinced by the review that gave a 1 and have decided to increase my score to help the paper's chance of acceptance. Claims And Evidence: Yes. The main claims—that common BNN algorithms fail to behave in a strictly Bayesian manner—are supported by well-designed empirical analysis. The claim that martingale posteriors could be a worthwhile pursuit could be further substantiated however. Methods And Evaluation Criteria: There are multiple ways in which BNN algorithms may fail to be strictly Bayesian. This paper examines a subset of these properties, focusing on functional variability, forgetting past data, and predictive coherence. The choices are reasonable, as they directly impact certain applications. While the experiments use only synthetic data for regression and CIFAR-10 for classification, this is sufficient to support the claims, as the findings are expected to generalize. Theoretical Claims: The paper primarily focuses on empirical analysis, with limited theoretical contributions. Proposition 6.1 formalizes the predictive coherence of Bayesian inference using martingale posteriors. The proof appears correct. 
Experimental Designs Or Analyses: The experiments span synthetic regression and CIFAR-10 classification to evaluate whether common BNN algorithms exhibit Bayesian properties. They convincingly demonstrate that approximate inference algorithms fail to satisfy certain Bayesian properties. The empirical design is sound, and additional experiments would likely reinforce rather than alter the conclusions. However, further experiments on the martingale posterior could strengthen the paper's concluding claim that MPs are worth pursuing in Bayesian deep learning. Supplementary Material: The supplementary material is well-structured and helpful. Appendix A clearly lays out Bayesian model averaging (BMA) and its estimation, which is often underexplained in Bayesian deep learning papers. Appendix B provides a useful summary of approximate BNN algorithms, making the paper more self-contained. However, I was surprised that martingale posteriors (MP) were not discussed in detail in the appendices. Given that MPs are relatively new and originally published in a statistics journal, a more thorough explanation would improve accessibility for ICML readers. Relation To Broader Scientific Literature: The paper contributes to ongoing discussions in Bayesian deep learning, where much of the focus has been on improving inference algorithms for better predictive performance and uncertainty quantification. While prior works have acknowledged that approximate Bayesian inference does not strictly adhere to Bayesian principles, this paper systematically documents key deviations. The use of martingale posteriors as a potential fix is intriguing but underdeveloped. It would be valuable to compare this to the broader literature on alternative Bayesian approximations and uncertainty estimation techniques. Essential References Not Discussed: I did not identify any crucial references missing from the discussion.
The paper effectively situates its contributions within the existing Bayesian deep learning literature. Other Strengths And Weaknesses: I found this paper well-argued and engaging. It articulates a viewpoint that aligns with my own perspective—that we should not expect BNNs to be strictly Bayesian but instead view them as practical probabilistic models. However, while the paper critiques existing BNN methods effectively, the proposed alternative (martingale posteriors) is presented in a somewhat preliminary manner. The conditions required for the martingale posterior to be well-defined are not fully discussed, and it remains unclear whether it truly resolves the issues identified with standard BNN algorithms. Other Comments Or Suggestions: ### Minor comments: - This paper honestly reads more like a position piece than a purely technical contribution. The only technical contribution is the MP part, which is far too brief and preliminary. - Figure 8 caption: "Their martingale posterior over-reduces confidence"—the phrase "over-reduces" is awkward. Consider rewording for clarity. - Line 352: "a belief updating scheme $ P(\theta) \mapsto P_y(\theta) $." Since this convention is only used in the context of $ \pi $, why not define it directly in terms of $ \pi $ rather than $ P $? - The notation for the equation in Line 411 confuses me. On the left-hand side, I don’t understand why there is a hat on $ \pi $. - Would it be possible to explicitly write out the reason for the last equality in the proof of Proposition 6.1?
- In the abstract, the key finding is written as “common BNN algorithms…fail to update in a consistent manner, forget about old data under sequential updates, and violate the predictive coherence properties…” Then later, starting around the last sentence on page 1, it is written that these common BNN algorithms 1) “lack the functional consistency of the Bayesian posterior”, 2) “sequential inference using BNNs fails to propagate information coherently” and 3) “BNN predictive updates are not well-calibrated”. Are these two sets of claims meant to match up one to one and in that order? I fail to see how predictive coherence is the same as being well-calibrated. Also, I don’t understand the term “functional consistency.” Reading Section 4.1, it seems that common BNN algorithms lack the true functional “variability”. - I find the notation used in the CIFAR-10 experiment a bit confusing. Is $ x^* $ a single unlabeled image or a set of unlabeled images? Questions For Authors: 1. The introduction states that BNNs “offer better predictive performance…than deterministic networks in multiple settings.” How do these claims hold up against deep ensembles? 2. The martingale posterior is defined without any conditions on the belief updating scheme. Is this truly appropriate? A belief update could take many forms—does this framework require specific assumptions to guarantee desirable properties? 3. In the CIFAR-10 experiment, why does using the intermediary \( \pi’ \)-predictive to impute labels lead to a well-defined martingale posterior? 4. I’m not sure my reading of the CIFAR-10 experiment is correct. Could you please check my summary of your third claim under Summary and correct any misconceptions I have? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful feedback! We are glad you found our paper “engaging” and “well argued”. We hope our responses and new experiments below answer your questions and alleviate the concerns you raised. > The proposed alternative (martingale posteriors) is presented in a somewhat preliminary manner ...it remains unclear whether it truly resolves the issues identified with standard BNN algorithms. While we agree that the paper could be improved by expanding on martingale posterior (MP) evaluations and will present new results on this shortly, we first wanted to discuss its role within our overall contributions. Our primary aim, as you eloquently explain, is to investigate the adherence of BNNs to the theoretical principles of Bayesian updating. Predictive coherence (Section 6) is one of these key principles. The MP framework is a novel tool we propose for this evaluation, which then naturally leads to our experiments in Figures 7 and 8 intended to identify the lack of coherence in existing approaches. It is only after we have conducted these tests that the avenue of using MPs as a mechanism itself to potentially combat the problem becomes apparent. Given our original aims and the large amount of other content, we thus feel that fully exploring this new idea is not feasible within the scope of the current paper. Instead, we see this as a very exciting avenue for future investigation, adding significance to our work. Nonetheless, we do plan to significantly expand on our discussion of the MP using the extra page in the camera ready (see also response to Reviewer 8qCC) and an exposition in the appendix for ML audiences, as you very rightly suggest. 
> The claim that [MPs] could be a worthwhile pursuit could be further substantiated …, further experiments for the [MP] could strengthen the paper’s concluding claim that MPs are worthy to pursue in Bayesian deep learning… To provide more evidence for the generality of the MP findings, __we have repeated our CIFAR experiments on the IMDB dataset__: [see here](https://raw.githubusercontent.com/anon17438/anon-rebuttal-material/main/rebuttal_martingale_posterior_imdb.pdf). The VI results are almost identical, strengthening the argument that the MP can provide performance gains in this setting. For SGLD, while the performance comparison of the MP is inconclusive, we still observe significant incoherence in uncertainty calibration, which this experiment was originally set up to test. > It would be valuable to compare [MPs] to the broader literature on alternative Bayesian approximations and uncertainty estimation techniques. Thank you for raising this. We will add more discussions and references to the broader literature in Section 3 including non-standard “generalized” updates to Bayes’ rule (we assume this is what you mean by “alternative Bayesian approximations"), Gibbs posteriors, PAC-Bayesian methods, and non-Bayesian approaches like epistemic neural networks. Please let us know anything else you think would be important to discuss here. ## Questions For Authors > 1. The introduction states that BNNs “offer better predictive performance…than deterministic networks in multiple settings.” How do these claims hold up against deep ensembles? Deep ensembles (DE) have also indeed been shown to offer better performance than deterministic networks, and we actively avoid taking a position on the relative superiority of BNNs and DE. Assessing non-Bayesian uncertainty quantification methods like DE under our Bayesian lens would certainly be an interesting future investigation. > 2. 
The [MP] is defined without any conditions on the belief updating scheme … does this framework require specific assumptions to guarantee desirable properties? We refer back to our initial discussion that the desirable property of MPs for the purpose of our experiment is to recover the initial posterior. Typically, they will not do so under approximate inference. > 3. In the CIFAR-10 experiment, why does using the intermediary $\pi’$-predictive to impute labels lead to a well-defined [MP]? The regularity assumptions needed for well-defined MPs are generally weak, requiring that the prior and updated belief densities exist, and BMA has finite expectation, clearly satisfied for classification where it is bounded. We will expand on this. > 4. I’m not sure my reading of the CIFAR-10 experiment is correct. The summary is correct. We exploit the property that sampling from $\Theta\sim\pi(\theta|y)$ is equivalent to imputing a random observation $Y'\sim p(y’|y)$ and then drawing the posterior sample $\Theta\sim\pi(\theta|y,y')$. Our experiments find that the samples obtained this way deviate from the original distribution in their BMA, as captured by our evaluation metrics. We apologize we do not have the space to address all minor comments, but we will incorporate them into the paper. Again, thank you for the review. Please let us know of any further questions or concerns.
Summary: This paper investigates the properties of several popular algorithms for approximate posterior inference in Bayesian neural networks (BNNs). The main experimental findings are that common approximate inference algorithms in BNNs: (a) do not exhibit functional consistency of posteriors, (b) do not propagate information coherently when performing sequential inference, and (c) do not exhibit predictive coherence. Claims And Evidence: Each of the three experimental findings is supported by evidence. Methods And Evaluation Criteria: There is a mix of toy and larger-scale experiments, which is sensible for investigating the questions being asked. Theoretical Claims: There are no novel theoretical claims made in the paper. Experimental Designs Or Analyses: The experimental analyses seem reasonable, although I did not fully understand the martingale posterior experiment. Supplementary Material: No. Relation To Broader Scientific Literature: The contributions are positioned to explain some folklore knowledge within the Bayesian deep learning community. It is well known that Bayesian methods are sound for models with small numbers of parameters. However, direct application of Bayesian methods to high-dimensional deep neural networks is fraught with practical issues. These issues are known well enough by the community to be explored in the NeurIPS workshop "I Can't Believe It's Not Better (ICBINB)". This paper provides a more systematic study of some of these issues, and offers concrete evidence of the shortcomings of current approximate posterior inference methods. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: I have already described above the main strength, which is to shed light on some problematic issues with approximate posterior inference in BNNs. Soundness and support for claims are also discussed above, and the writing is largely quite coherent.
There are also a few weaknesses that deserve mention: - In a set of experiments like this, it would be nice to have a small version where exact inference is possible. This would help clarify which effects are due to HMC being treated as the gold standard and what is inherent to Bayesian inference itself. - The paper demonstrates several issues with approximate inference algorithms for BNNs but does not attempt to provide theoretical understanding of why these issues occur or what can be done to remedy them. - Discussion about key takeaways is also relatively light: should practitioners avoid BNNs completely? Should they be used only in certain contexts? Is more research needed to develop inference schemes that respect e.g. coherence in sequential inference? Other Comments Or Suggestions: - p. 2, ll. 58-62: Section 5.2 is listed twice. - p. 3, l. 145 (second column): What does BMA stand for? - p. 3, l. 151 (second column): It may help the flow to discuss applications of BNNs earlier, in Section 2. - p. 3, l. 156: Are the sizes listed correctly here? 26 seems like a typo. - p. 4, l. 190: Has $\beta$ been defined at this point in the paper? Questions For Authors: In addition to the weaknesses raised above: are there any cases where the uncovered pathologies are actually beneficial? One person's incorrect posterior is another person's correct posterior with a different prior. Might there be a case where narrowing down the functions as done by VI, Laplace, and SWAG in Figure 1 can actually help? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful feedback! We hope that the responses and new results below answer your questions and alleviate any remaining concerns. > In a set of experiments like this, it would be nice to have a small version where exact inference is possible. This would help clarify which effects are due to HMC being treated as the gold standard and what is inherent to Bayesian inference itself. Great suggestion! To provide this, __we have performed an extra experiment with Bayesian linear regression__: [see here](https://raw.githubusercontent.com/anon17438/anon-rebuttal-material/main/rebuttal_exact_inference_experiment.pdf). To keep higher input dimensionality, we expand $x$ into 20 random Fourier features with a squared exponential kernel, approximating a (sinusoidal) Gaussian process regression. Exact inference demonstrates the desired coherent behavior. MFVI updating still shows coherence, despite its apparent lack of faithful posterior representation. For completeness, we have run all algorithms in the single-step setting and confirmed that HMC matches the true posterior. For this classical Bayesian setup, even our inexact approximate inference schemes exhibit Bayesian properties. This contrasts with the BNN behavior we find at scale. We will expand on this running example to demonstrate the expected behavior with exact Bayesian updating throughout the paper. > The paper demonstrates several issues with approximate inference algorithms for BNNs but does not attempt to provide theoretical understanding of why these issues occur or what can be done to remedy them. The primary objective of our paper is to highlight the discrepancy between the theoretical expectations and the actual behavior of BNNs. We thus believe it is natural that we focus on experimental evaluation rather than new theory.
The theoretical reason for the issues is arguably quite straightforward: inexact inference causes substantial discrepancy in behavior from the true posterior. One could try to do theoretical analysis on this gap itself, but it would likely be challenging and prone to the same theory-practice gap we are investigating in the first place; we feel this is beyond the scope of what can be achieved in the current work. In terms of remedies, these extremely challenging and fundamental issues at the core of the BNN methodology cannot feasibly be solved in a single paper. We see our work as the first step on this journey, highlighting and carefully documenting the issues, but we expect it to be many years before the community has managed to remedy them, if indeed they ever do. > Discussion about key takeaways is also relatively light… Thank you for this feedback. We will exploit the extra page available for the camera ready to add more discussion on key takeaways. On the specific questions you raise: > Should practitioners avoid BNNs completely? Should they only be used in certain settings? Definitely not! They have been shown to outperform point estimates and provide effective static uncertainty in multiple settings, which our results do not undermine. We simply argue that care is needed in our expectations and interpretation, especially outside these prediction settings where the pathologies we discuss might occur. In particular, we need to be very careful about empirical evaluation, as good static performance may not translate to other desired behaviors. > Is more research needed to develop inference schemes that respect e.g. coherence in sequential inference? We would indeed argue that more research is needed in this direction. Moreover, in our opinion this should not be thought of as simply an inference scheme problem: entirely non-Bayesian methods might in practice give more coherence under sequential inference.
Even though the problems we raised are a direct result of inexact inference, improving inference schemes is not necessarily the solution: accurate inference may be unachievable, and strategies focusing on final behavior could be more effective. In short, this remains a largely unsolved problem with many possible future research directions. Other questions we will discuss are _Why does it matter that BNNs are not truly Bayesian?_ and _If BNNs are not Bayesian, what are they?_. Let us know if there are any other suggestions. > p. 3, l. 156: Are the sizes listed correctly here? 26 seems like a typo. Well caught! Indeed this should say 256. > Are there any cases where the uncovered pathologies are actually beneficial? There is never likely to be any direct benefit in the inconsistent behavior itself, but the deviations from the “correct” posterior may potentially sometimes be beneficial from a static prediction perspective. For instance, [1] found the calibration of VI to be consistently better than that of more faithful Bayesian approximations such as HMC. Thank you again for your review and helpful suggestions! [1] Izmailov et al. (2021). _What are Bayesian Neural Network posteriors really like?_ ICML --- Rebuttal Comment 1.1: Comment: Thank you for your response and experiments. I do not have any further questions.
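The sequential-coherence property demonstrated by the new exact-inference experiment (updating on $D_1$ and then on $D_2$ matches a single update on $D_1 \cup D_2$) can be sketched with conjugate Bayesian linear regression. This is a simplified stand-in of my own, using plain Gaussian features rather than the random Fourier features of the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_posterior(X, y, m0, S0, noise_var=1.0):
    """Exact posterior for Bayesian linear regression with prior N(m0, S0)
    and known observation noise variance."""
    S0_inv = np.linalg.inv(S0)
    S = np.linalg.inv(S0_inv + X.T @ X / noise_var)  # posterior covariance
    m = S @ (S0_inv @ m0 + X.T @ y / noise_var)      # posterior mean
    return m, S

# Two synthetic datasets D1 and D2 from the same linear model.
d = 5
X1, X2 = rng.normal(size=(30, d)), rng.normal(size=(40, d))
w_true = rng.normal(size=d)
y1 = X1 @ w_true + rng.normal(size=30)
y2 = X2 @ w_true + rng.normal(size=40)

prior = (np.zeros(d), np.eye(d))

# Batch update on D1 ∪ D2 ...
m_batch, S_batch = gaussian_posterior(
    np.vstack([X1, X2]), np.concatenate([y1, y2]), *prior)
# ... versus sequential updating: the posterior after D1 becomes the prior for D2.
m1, S1 = gaussian_posterior(X1, y1, *prior)
m_seq, S_seq = gaussian_posterior(X2, y2, m1, S1)

# Exact Bayesian updating is coherent: the two posteriors agree.
assert np.allclose(m_batch, m_seq) and np.allclose(S_batch, S_seq)
```

The paper's point is that this identity, automatic under exact inference, breaks down once each update is replaced by an approximate BNN inference step.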
Summary: The paper investigates the alignment of Bayesian neural networks with respect to rigorous Bayesian principles/ideals. To do so, tasks like synthetic regression and classification on CIFAR datasets are considered. The main claimed findings are focused on 1) the lack of "functional consistency" shown by approximate posteriors (i.e. MFVI or Laplace approximations for BNNs), 2) the issues in sequentially propagating uncertainty in a rigorous way and 3) the ill-calibrated uncertainty of the posterior predictive density models. Claims And Evidence: The vast majority of claims and ideas introduced are well-supported with evidence from empirical results, technical derivations or well-referenced context. However, I would like to raise some points here that I thought could be imprecise or just odd from my point of view: - I don't really understand why references on Bayesian inference (for example @ L85) are focused on Robins et al. and other citations of this sort. Weren't the Bayesian ideals mentioned a contribution of other authors and papers produced long before those ones? - The BMA in Appendix A looks a bit trivial to me, or at least another way of technically rewriting standard Bayesian quantities over probabilistic densities. I am aware of the existence of BMA and its use, but here, it didn't give me the right feeling. I could be wrong on this, but that was what came to my mind when reading it. Methods And Evaluation Criteria: Similar thoughts as in the previous section on claims and evidence. I do think the paper is going in a good direction, exploring a super interesting topic (which, more or less, many in the probabilistic/Bayesian community have already suspected for a long, long time). The methods are in general well developed, thorough and supported by good evaluation criteria, all mixed with a high scientific spirit imo. Some additional comments and concerns on the methodology part: - Why, in Section 4, does the paper return to the functional posterior predictive perspective?
I don't see the point/utility here. - I am concerned about the heteroscedastic noise in Sec. 4 being modelled with a $\sigma^{2}_{\theta}(x)$ NN. I think there is more or less a consensus in the probabilistic ML community that this is not a great idea, as it fails to produce well-calibrated uncertainties outside the regions with a high density of datapoints. (See for instance Detlefsen et al. NeurIPS 2019). Therefore, I'm a bit afraid that the use of such networks is somehow deteriorating the correctness of the conclusions obtained from the empirical results there. --- Detlefsen et al. NeurIPS (2019) / "Reliable training and estimation of variance networks" Theoretical Claims: Just one question on the martingale assumption from Theorem 6.1, used also later in Proposition 6.1. How difficult or easy is it to satisfy the condition that $\theta_{N}$ is a martingale? I don't really have a feeling here for the NN parameters of different architectures. Could the authors add a bit more on this? Experimental Designs Or Analyses: I do like the way the experiments support the main 3 points/parts of the findings, that's great! However, to me the most interesting of the three parts is for sure the last one, from Section 6. To me, it becomes somewhat unclear due to the lack of space. The analysis for Fig 6 and 7 is super important, but results are presented in a quick manner with many details poorly discussed... The way the experiment is designed looks correct to me, but for instance, the intermediary $\pi'$-predictive densities used to sample are also unclear (or I don't have a sense of how to obtain these in a reproducible way), and how the empirical MP is computed in L412 also doesn't give me a good feeling about applying it to different BNN models. Supplementary Material: I did not proofread the Appendix in depth, just read some initial parts to support my understanding during my reading.
Relation To Broader Scientific Literature: Apart from my comments made in the first sections, I don't miss anything special here, or at least I'm not aware of missing literature/references. The relation and connection with the ideas discussed in the BNN community also seem right to me. Essential References Not Discussed: Idem to prev. review section Other Strengths And Weaknesses: NC Other Comments Or Suggestions: NC Questions For Authors: I do think the paper is going in a great direction and has key value that makes it worth accepting. Because I still have doubts and concerns about some parts and points, I recommend weak acceptance at this point, but I would also be happy to vote for clear acceptance of the manuscript if the rest of the reviewers agree and there are no problems that I accidentally overlooked. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful review and helpful feedback! We are glad you found our paper interesting, and hope that the responses and new results presented below help increase your confidence in backing acceptance of the paper. > I don't really understand why references on Bayesian inference […] are focused on Robins et al. and other citations of this sort [...] Thank you for pointing this out. We agree and will update the references to some of the originals (e.g. [1,2]). > The BMA in Appendix A looks a bit trivial to me [...] We appreciate that this appendix presents background that might appear elementary. It introduces terminology and makes our work accessible by detailing common concepts “often underexplained in Bayesian deep learning papers”, quoting reviewer KcGY. We will add a paragraph to better position this appendix. Thanks for highlighting it! > Why in Section 4, does it come back to the functional posterior predictive perspective? I don't see the point/utility here. We think this is worth reiterating here. Pointwise uncertainty quantification (UQ) and BNNs improving UQ over SGD are of course well studied; the novel investigation here is to evaluate BNNs in dynamic settings. Indeed, VI, for example, produces good static results and uncertainty but its functional and uncertainty calibration properties deviate from Bayesian behaviors. > I am concerned about the assumption of heteroscedastic noise modelling in Sec. 4 [...] I'm a bit afraid that the use of such networks is somehow deteriorating the correctness of conclusions obtained from the empirical results there. Thank you for pointing this out! It is true that the choice of observation model may have a significant impact on the performance and behavior of Bayesian models, and we agree this is important to investigate.
We highlight that there are relevant ablations already in our appendix (in Appendix F we examine model misspecification and in Appendix D we do a number of ablations on network architecture). To show, however, that our empirical findings are not merely an artefact of the difficulty in heteroskedastic noise calibration, __we have repeated our experiment from Figure 3 with a homoskedastic noise model__: [see here](https://raw.githubusercontent.com/anon17438/anon-rebuttal-material/main/rebuttal_homoscedastic_figure_3.pdf). We run inference both with a fixed noise variance and with an inverse Gamma hyper-prior placed on it for a fully Bayesian treatment. In both cases, the results agree with the conclusions of Figure 3. > How difficult/easy is satisfying that $\theta_N$ is a martingale? $\theta_N$ is always a martingale under proper Bayesian updating regardless of architecture (subject to some very weak assumptions about the posterior existing). It will typically no longer be a perfect martingale under approximate updates, which is why we are using this as a test for adherence to Bayesian principles. We will add more high-level discussion on this to the section to make it clearer. >...[Section 6] becomes somehow unclear due to the lack of space. The analysis for Fig 6 and 7 is super important, but results are presented in a quick manner with many details poorly discussed… Thank you for this important feedback. We do agree the clarity has suffered here from needing to fit everything within the length constraints. We plan to use most of the extra page available for the camera ready to expand on and improve the clarity of this section. In particular, we plan to add more high-level intuition and explanation detail throughout, further interpret the results, and add more insights on the MP behavior itself. We have also repeated the experiments from Figures 7 and 8 with an additional dataset, as discussed in the response to Reviewer KcGY.
Please let us know if there is anything further you feel we should add. > The way the experiment is designed looks correct to me, but for instance, the intermediary pi’-predictive densities used to sample are also unclear (or I don't have a taste on how to obtain these in a reproducible way), and how the empirical MP is computed in L412 also doesn't give me a good feeling if applied to different BNN models. The $\pi’$ distributions correspond to an initial trained BNN posterior. Our imputation follows the predictive resampling step from the original MP paper. We first draw a random setting of weights from the posterior $\Theta\sim\pi’(\theta)$, then generate a label from the predicted class probabilities $f_\Theta(x^*)$ in the case of classification (but it generalizes). For reproducibility, in our experiments we have taken sequentially increasing seeds. We will make updates to ensure this is clear. Thank you again for your insightful review which has really helped us to improve the paper. Let us know if you have any further questions or concerns. ----------- [1] Jeffreys, H. (1939). _Theory of Probability_. Clarendon Press. [2] Savage, L. J. (1972). _The foundations of statistics_. Courier Corporation.
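The predictive resampling step described in this rebuttal (draw $\Theta\sim\pi'(\theta)$, then sample labels from the class probabilities $f_\Theta(x^*)$) can be sketched as follows. The linear-softmax "posterior" here is a hypothetical stand-in for a trained BNN's weight samples, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stand-in for a trained BNN posterior: weight samples for a
# linear softmax classifier with d input features and k classes.
d, k = 4, 3
posterior_samples = [rng.normal(size=(d, k)) for _ in range(10)]
x_unlabeled = rng.normal(size=(100, d))

def impute_labels(posterior_samples, x):
    """One predictive resampling step: draw Theta ~ pi'(theta), then sample a
    label for each input from the predicted class probabilities f_Theta(x)."""
    theta = posterior_samples[rng.integers(len(posterior_samples))]
    probs = softmax(x @ theta)
    return np.array([rng.choice(k, p=p / p.sum()) for p in probs])

labels = impute_labels(posterior_samples, x_unlabeled)
# Refitting the BNN on the original data plus (x_unlabeled, labels) would
# then yield one new posterior sample for the coherence test.
```

Repeating this with sequentially increasing seeds, as described above, gives the imputed datasets and refit posteriors used in the metric-shift comparison.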
Summary: The paper empirically investigates how well popular approximate inference algorithms for BNNs respect the theoretical properties of Bayesian belief updating. The study tries to examine whether different Bayesian neural network (BNN) posterior approximations adequately capture epistemic uncertainty by analyzing their behavior in both parameter space, i.e. weights, and function space, i.e. predictive distributions. Experiments reveal HMC preserves richer functional diversity in high-uncertainty zones, while variational and other approximate methods oversimplify predictions despite training data constraints. The work is a good summary of the insights into Bayesian inference algorithms. But the studies presented here are broadly understood by the community. The tradeoffs of the different approximate Bayesian inference approaches are well studied. Lacks novelty. Claims And Evidence: - Methods And Evaluation Criteria: - Theoretical Claims: - Experimental Designs Or Analyses: - Supplementary Material: Provides a summary of different Bayesian inference algorithms. Relation To Broader Scientific Literature: These findings are broadly understood. Lacks novelty. Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your work in reviewing our paper. However, we believe there may have been some significant misunderstandings about our work, and we are a little puzzled by both your summary of our work and the conclusions of your review, neither of which reflects our actual contributions. For instance, the review claims the paper studies whether “different BNN posterior approximations adequately capture epistemic uncertainty by analyzing their behavior in both parameter-space i.e. weights and function-space i.e. predictive distributions”. However, our work is not about comparing how different posterior approximations capture epistemic uncertainty, and we do not have any analysis or experimentation at all about parameter-space behavior. The tradeoff between various approximate inference techniques in terms of their static predictive performance is indeed already well understood, as we acknowledge in the paper itself. As explained in, for example, the penultimate paragraph of our introduction, our work is instead about critically examining how well these BNN algorithms adhere to various important theoretical principles of Bayesian belief updating in dynamic settings, which are not in themselves often used to evaluate inference schemes. This goes beyond simply assessing predictive performance or understanding computational and expressivity trade-offs between different approximate inference schemes. The work shifts the focus to systematically examining the behavior of Bayesian updates, looking at functional consistency, coherency of sequential belief updates, and the self-consistency of predictive uncertainty. Other reviewers recognize this perspective as a significant empirical contribution not yet widely established within the community. We would therefore like to ask you to please reconsider your assessment.
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models
Accept (poster)
Summary: This paper proposes Orient Anything, a foundation model for predicting object orientation from a single image in a zero-shot manner. The key contribution of the paper is curating a large-scale dataset for the orientation estimation task, rendered from Objaverse and comprising 2M images. The authors propose to deal with the ambiguity of low-dimensional poses by using probabilistic distributions over object azimuth, elevation/polar angle, and camera rotation. The proposed method is evaluated on both synthetic images (on orientation error) and real images (on recognition tasks). Claims And Evidence: The authors claim this paper to be the first foundation model for object orientation estimation. While it seems interesting, I am worried about the evaluation protocol on real data, where the orientation error is not studied. Methods And Evaluation Criteria: The proposed method includes two parts: 1) data curation and 2) training an orientation prediction model. The data curation method includes filtering non-axis-aligned objects, identifying ambiguous symmetric objects, and rendering synthetic data. The orientation prediction model is based on a ViT encoder with several decoders for predicting the probabilistic distributions of rotation angles. The trained model is evaluated on both real and synthetic data. On synthetic data, it is evaluated mainly with orientation error; while on real data, due to the lack of pose annotations, it is evaluated with higher-level recognition tasks. Theoretical Claims: N/A Experimental Designs Or Analyses: I am not fully convinced by the evaluations, especially the lack of results on orientation error on real images. Showing the recognition task performance is not enough to convince me that this is a foundation model. And I am not very sure how the method trained on synthetic data with white backgrounds can generalize to real and even in-the-wild images.
Besides, one of the biggest contributions of this paper is the curated training data. However, there is no study of the dataset quality, and not even a visualization of what the dataset looks like. I also think it would be better if the authors performed an experiment using different ratios of the curated dataset for training and explored whether there is any scaling effect. Supplementary Material: Yes, the supplementary material includes category-level quantitative results and extra visualizations and examples. Relation To Broader Scientific Literature: The posed task can be useful for object 6D pose estimation and may have potential to benefit robotic applications. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see comments above. Other Comments Or Suggestions: N/A Questions For Authors: 1. Why not include quantitative results on real images? The authors claim that there are no data for performing evaluation. However, I don't think this is true: there are plenty of object pose estimation benchmarks that could be used for evaluation. Besides, object 6D pose estimation works should also be discussed in detail -- how is the orientation estimation task different, and why is using no reference mesh/image important? 2. Object orientation is not a well-defined task. The definition of orientation is closely related to the semantic meaning of specific object categories, i.e., the canonical pose space in which objects are aligned as forward-facing. Thus, on novel categories, the orientation of objects is not well defined, and it is necessary to study how the model generalizes to categories unseen during training, as the categories in Objaverse are not guaranteed to include every category in real life. 3. The authors should also discuss why a model trained on synthetic data can generalize to real data. 4. Besides, the authors should study the quality of the curated dataset and the influence of training data scale.
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable review and suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***W1.1 & Q1.1: Quantitative orientation error on real images.*** Currently, the 6 benchmarks in Table 2 are evaluated on real-world data with quantitative 3D orientation error. Moreover, we have also tested on the latest and largest object 3D orientation benchmark, ImageNet3D [1], which covers 200 object categories. The Acc@30° results are as follows:

| Setting | Model | Avg. | Electronics | Furniture | Household | Music | Sports | Vehicles | Work |
|---|---|---|---|---|---|---|---|---|---|
| Zero-shot | ImageNet3D-ResNet50 | 37.1 | 30.1 | 35.6 | 28.1 | 11.8 | **51.7** | 36.7 | **40.9** |
| | Orient Anything-B | **48.5** | **61.0** | **66.8** | **37.9** | **27.3** | 25.6 | **70.8** | 33.4 |
| Fine-tuning | ImageNet3D-ResNet50 | 53.6 | 49.2 | 52.4 | 45.8 | 26.0 | 65.2 | 56.5 | 58.5 |
| | ImageNet3D-DINOv2-B | 64.0 | 75.3 | 47.9 | 32.9 | 23.5 | **74.7** | 38.1 | **64.0** |
| | Orient Anything-B | **71.3** | **77.6** | **89.7** | **64.4** | **54.4** | 47.6 | **87.4** | 61.2 |

Note that we couldn't find detailed definitions for the major categories like *Electronics*, *Furniture*, and *Household* in ImageNet3D, so we used GPT to map its 200 categories into the 7 broader ones. Therefore, the comparison results for each category may vary, and the average score provides a more meaningful comparison. [1] Ma W, Zhang G, Liu Q, et al. ImageNet3D: Towards general-purpose object-level 3D understanding. NeurIPS 2024. --- ***W1.2: Generalization to real and even in-the-wild images.*** We provide numerous visualizations on real and in-the-wild images in both our main text and supplementary materials.
Furthermore, in response to your W1.1 & Q1.1, we present results on the current largest 3D orientation estimation benchmark, ImageNet3D, which also demonstrate strong generalization to these real-world scenarios. --- ***W2.1 & Q4.1: Study on the dataset quality.*** In <https://anonymous.4open.science/r/visualization-B728/Training_Samples/>, we provide many visualization cases from our curated training dataset, showcasing both high-quality images and accurate orientation annotations. --- ***W2.2 & Q4.2: Influence of training data.*** We train ViT-B version models using different ratios of data. Below are the results:

| Data Ratio | COCO (Acc) | ImageNet3D (Acc@30°) |
|---|---|---|
| 25% | 63.55 | 45.47 |
| 50% | 65.47 | 44.08 |
| 75% | 66.82 | 47.65 |
| 100% | **69.85** | **48.52** |

--- ***Q1.2 Difference and advantage over object 6D pose estimation.*** **Difference:** Traditional 6D pose estimation methods focus on relative orientation to a reference frame or template 3D model, while Orient Anything focuses on semantic orientation (e.g., the semantic "front face" of an object) without any reference. Therefore, previous benchmarks for object pose estimation are not suitable for our task. **Importance of no reference mesh/image:** Monocular images are the most accessible and widely used form of visual input, and in many scenarios, references for the desired object are often unavailable. By not relying on reference meshes or images, Orient Anything enables broader applications, such as solving spatial reasoning questions and evaluating whether a generated image adheres to the desired spatial relationships, as discussed in Section 7.
--- ***Q2 Generalization to unseen categories.*** In response to your W1.1 & Q1.1, we further provide evaluation results on the current largest single-view orientation estimation benchmark, ImageNet3D, where our model significantly surpasses existing methods and demonstrates strong generalization to real images and various categories. Additionally, in response to Reviewer ev5K's W1 & Q1, we discussed how to further scale the annotated data and expand the covered categories through synthetic 3D assets and a voting-based annotation strategy. --- ***Q3 Why a model trained on synthetic data can generalize to real data.*** The synthetic-to-real generalization ability is mainly obtained through the task-agnostic pre-trained model, which is trained on massive amounts of real images. A similar idea has been discussed and validated in Marigold [2] and Depth Anything V2 [3]. For further discussion, please refer to the response to Reviewer dtES's W3. [2] Ke B, Obukhov A, Huang S, et al. Repurposing diffusion-based image generators for monocular depth estimation. CVPR 2024. [3] Yang L, Kang B, Huang Z, et al. Depth Anything V2. NeurIPS 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal and extra evaluation results. The rebuttal has addressed all of my questions, and I have updated my score. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your kind support! In the final revision, we will further enhance the paper by incorporating the additional experimental results and the valuable insights from the reviews. Thank you again!
Summary: The paper introduces Orient Anything, a foundational model designed for zero-shot estimation of object orientation from monocular images. Due to the scarcity of orientation annotations for open-world objects, the authors develop an automated 3D object orientation annotation pipeline that effectively utilizes the extensive resources of 3D models to create a diverse dataset of 2 million images with precise orientation labels. To enhance training stability and improve generalization, the authors introduce a robust training objective that models the 3D orientation as a probability distribution. Additionally, they propose several strategies to improve synthetic-to-real transfer, achieving state-of-the-art accuracy in orientation estimation for both rendered and real images. Experimental results demonstrate the superiority of the proposed method and highlight the significant potential of Orient Anything for high-level applications, such as enhancing spatial understanding and scoring spatial generation. Claims And Evidence: The claims made in the submission are well supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method holds significant value for a variety of applications, including understanding spatial information from images, verifying the spatial accuracy of generated images, and more. My main concern is the weak novelty of the method itself: most of the designs are common in computer vision. Nevertheless, I think the contributions of this article in terms of the dataset and foundation model are still meaningful. Theoretical Claims: I have checked the correctness of Equations (1-4) in Section 5.1 and have no issues with them. Experimental Designs Or Analyses: The experimental design and analysis are generally sound. Tables 2 & 3 show the superiority of the proposed method on both in-domain and out-of-domain datasets. Tables 4 & 5 fully verify the key designs of the proposed method. Besides, I still have some questions: 1.
In Figures 13 and 14, it appears that the input spatial context provided for 'Orient Anything+LLM' already contains the answer to the question. This raises a question about whether it is valid to assess the LLM's response in this scenario. Does the improvement of 'Orient Anything+LLM' shown in Table 1 primarily rely on the accuracy of Orient Anything itself? 2. The example shown in Figure 2 is inappropriate. The authors do not provide a clear coordinate reference for the LLM, which makes the answer ambiguous. After trying the following two questions, GPT-4o got the correct answer: Q1: Does Falcon face me? & Q2: So, in Falcon's view, where is Captain America? Supplementary Material: I reviewed the code provided in the supplementary material. Relation To Broader Scientific Literature: The model proposed in this paper can evaluate the spatial correctness of the results of a generative model. As discussed in "Towards Foundation Models for 3D Vision: How Close Are We?", state-of-the-art Vision-Language Models still perform poorly at spatial understanding, which is consistent with the findings of this paper. Essential References Not Discussed: Essential references have been well discussed in the paper. Other Strengths And Weaknesses: Strengths: 1. The ablation experiments in this paper are very thorough. They fully verify the main design and contribution of this paper. 2. This paper is well-written and easy to follow. 3. The created large dataset is of great significance to the development of the community. Main weaknesses: 1. The experiments and analysis on LLMs are biased, as discussed in 'Experimental Designs Or Analyses'. The authors should provide a more comprehensive comparison and analysis, for example by increasing the complexity of the questions to weaken their connection with the given spatial context. 2. The use of 'Synthetic-to-Real Transferring' in the paper lacks rigor.
Firstly, employing a better pre-trained model can enhance the model's performance on its own, which is not directly related to synthetic-to-real transfer. Secondly, the crop operation is a standard procedure in deep learning. 3. What is the speed and cost of the method? 4. Compared with the 6D pose estimation methods (e.g., FoundationPose), does the proposed method have more advantages in orientation estimation? Other Comments Or Suggestions: Can the method proposed in this paper be applied to video data? Additionally, is it capable of achieving stable orientation estimation results from video content? Questions For Authors: No more questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review recognizing the significance of our paper, and for your invaluable suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***Q1: Improvement of "Orient Anything+LLM".*** "Orient Anything + LLM" is designed to demonstrate the accuracy of Orient Anything in real-world applications and highlight the importance of orientation perception for understanding spatial relationships. Since we use a text-based LLM that cannot perceive visual content, it relies on Orient Anything and the object detection model to understand spatial relationships in images. The improvements indeed come from the accuracy of these models. --- ***Q2: Example shown in Figure 2.*** First, we conducted the test in November 2024, and subsequent version updates of GPT-4o may yield different results. Second, we believe the phrase "In Falcon's view (or perspective)" provides a coordinate reference, making both our question and your Q2 unambiguous. Adding a prefix question (e.g., "Does Falcon face me?") may explicitly prompt the model to transform the perspective first, which should be a natural reasoning step in a single Q&A. Breaking down a question into sub-questions naturally lowers its difficulty, so comparing direct answers with manually split versions may not be entirely fair. --- ***W1: Increasing the complexity of the question.*** When constructing Ori-Bench, we created different subsets for distinct purposes. The basic "object direction" subset directly tasks the model with recognizing the orientation of individual objects, which aligns with the capabilities of Orient Anything. The more advanced "spatial part" subset requires understanding the pose of individual objects. Finally, the "spatial relation" subset involves open-domain questions that require further reasoning to solve.
We believe these three progressive subsets offer a more comprehensive evaluation of a model's ability to address orientation-related questions. The most challenging "spatial relation" subset contains the most test samples, and improvements in this subset highlight the importance of understanding orientation when handling high-level spatial reasoning tasks. Thank you very much for your suggestion. In future updates, we will include more challenging questions and try to distill Orient Anything's knowledge into a VLM, thereby inherently integrating object orientation understanding capabilities. --- ***W2: Discussion about “Synthetic-to-Real Transferring”.*** Several works have discussed the relationship between pre-trained models and synthetic-to-real transfer. Marigold [1] transforms Stable Diffusion into a depth diffusion model using only synthetic data. Depth Anything V2 [2] provides a more systematic analysis of how different pre-trained models impact "Synthetic-to-Real Transfer." Additionally, while the crop operation is a standard processing technique, in our case it is specifically aligned with the goal of transferring from (complete) synthetic training objects to (often occluded) real objects. [1] Ke B, Obukhov A, Huang S, et al. Repurposing diffusion-based image generators for monocular depth estimation. CVPR 2024. [2] Yang L, Kang B, Huang Z, et al. Depth Anything V2. NeurIPS 2024. --- ***W3: Speed and cost of Orient Anything.*** We only add four very lightweight MLP prediction heads (about 1M parameters) to the standard DINOv2 encoder. The parameter count remains nearly identical to the original DINOv2.
| | DINOv2 Encoder | Projection Heads |
|---|---|---|
| Orient-Anything-ViT-S | 22.06M | 1.17M |
| Orient-Anything-ViT-B | 86.58M | 1.51M |
| Orient-Anything-ViT-L | 304.37M | 1.74M |

--- ***W4: Difference and advantage over 6D pose estimation methods (e.g., FoundationPose).*** **Difference:** Traditional 6D pose estimation methods focus on relative orientation to a reference frame or template 3D model, while Orient Anything focuses on semantic orientation (e.g., the semantic "front face" of an object) without any reference. **Advantage:** Orient Anything does not require multi-view reference images or a ground-truth 3D model during inference. The predicted orientation is inherently linked to the object's semantic front, enabling broader applications, such as enhancing spatial understanding in VLMs and image generation. --- **Other: Extend to video.** Please refer to the response to Reviewer MxHL’s Q4. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal, which addressed most of my questions. Here, I still have some concerns: 1. The so-called contribution of 'Synthetic-to-Real Transferring' is over-claimed! As discussed by the authors, this idea has already been verified in previous works. 2. As shown in 'W2.2 & Q4.2: Influence of training data' of Reviewer 83jQ, the performance on ImageNet3D improves only slightly as the amount of training data increases. This makes me worry about the room for improvement in the model's zero-shot performance. --- Reply to Comment 1.1.1: Comment: Thanks for your response and further comments! --- ***Concern 1: Over-claimed 'Synthetic-to-Real Transferring'.*** Our strategy for “Synthetic-to-Real Transferring” involves two key components: - Model initialization to inherit real-world knowledge, and - Data augmentation to narrow the domain gap. Regarding *model initialization*, we fully acknowledge that this idea has been explored in prior work, as stated in the second paragraph of Section 5.2.
Besides, we explicitly clarify that we are *“evaluating this idea in our orientation estimation tasks”* in line 298. We sincerely appreciate your thoughtful reminder and will revise the description of this aspect in the final revision to ensure our claims are well-calibrated and clearly clarified. As for *data augmentation*, our crop-based training and segment-based testing augmentations are carefully designed to match the specific distributional differences between rendered and real-world images. These augmentations are tightly integrated with our methods and data. --- ***Concern 2: Slight improvement on ImageNet3D with data scaling.*** This observation is largely due to **differences in how object orientation is defined** in our dataset versus ImageNet3D for certain object categories. For example, objects like tables or skateboards, which exhibit front-back symmetry, are treated differently across datasets. ImageNet3D assumes these objects are always viewed from a canonical “front” (as described in Section A.1 of their paper), whereas in our dataset they are annotated as having no meaningful orientation, using symmetry-based criteria. This definitional mismatch limits the observed improvement when scaling up our training data for evaluation on ImageNet3D. However, as shown in our response to W1.1 & Q1.1 from Reviewer 83jQ, when we fine-tune Orient Anything directly on ImageNet3D’s training set, the model quickly adapts to its label definitions and significantly outperforms ImageNet3D-DINOv2-B. This result demonstrates both the transferability and adaptability of our method, indicating its potential as a foundational model for orientation estimation tasks. --- Once again, we sincerely appreciate your feedback and will clarify these points in the final version of the paper.
Summary: The paper introduces Orient Anything, a foundation model for zero-shot object orientation estimation. The key contributions include: 1) Leveraging 3D models and VLMs to annotate front faces, generating 2M synthetic images with orientation labels; 2) Modeling orientation as Gaussian distributions over angles (azimuth, polar, rotation) to improve training stability; 3) Using DINOv2 initialization and data augmentation (random cropping) to bridge domain gaps; 4) A VQA benchmark revealing VLMs’ limitations in orientation understanding. Results show state-of-the-art zero-shot performance on real-world datasets (e.g., SUN RGB-D, KITTI) and significant improvements over VLMs (GPT-4o, Gemini) on orientation-related tasks. Claims And Evidence: Basically yes. 2M images from 55K 3D models across 7,204 categories (vs. 100 in ObjectNet3D). But there are no ablation studies for key components (distribution fitting, augmentation), and the COCO evaluation’s 8-direction simplification lacks clarity on the mapping from 3D angles. Methods And Evaluation Criteria: Synthesizing data via 3D rendering and distribution-based training is sensible. Probability distributions effectively handle angle periodicity. Real-world benchmarks (SUN RGB-D, KITTI) are appropriate but compared unfairly to supervised models. Theoretical Claims: No theoretical proofs; methods are empirically validated. Experimental Designs Or Analyses: The COCO evaluation uses a simplified 8-direction task, which may not fully reflect 3D orientation; the mapping from predicted angles to directions needs clarification. The manual setting of variances (σ_θ, σ_φ, σ_δ) lacks sensitivity analysis. The impact of random cropping is asserted but not quantified. Supplementary Material: I can run the code. Relation To Broader Scientific Literature: Well-situated against 6DoF pose estimation and viewpoint detection. Connects to VLMs’ limitations via Ori-Bench.
Missing discussion of recent synthetic-data approaches or self-supervised orientation methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Novel use of synthetic data, the Ori-Bench benchmark, and strong empirical results. Cons: Sometimes the results are not good in my experiments running the provided code and checkpoint; limited ablation studies; unclear real-world evaluation protocol. Other Comments Or Suggestions: N/A Questions For Authors: - How are 3D angles (θ, φ, δ) mapped to 8 horizontal directions in the COCO evaluation? Could this simplification misrepresent orientation? - What is the quantitative effect of data augmentation on synthetic-to-real transfer? - How were σ_θ, σ_φ, σ_δ chosen? Was a sensitivity analysis performed? - How can the proposed method be extended to a video clip, where the object's orientation needs to be estimated in a temporal manner? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review recognizing the significance of our paper, and for your invaluable suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***W1&Q2&Q3: Limited ablation studies for key components: 1. distribution fitting, 2. augmentation, 3. random cropping, 4. sensitivity for variances (σ_θ, σ_φ, σ_δ).*** In Table 5 of the current manuscript, we compare **different learning objectives** (continuous regression vs. discrete classification vs. distribution fitting), inference augmentations (box vs. mask), and **training augmentations** (with vs. without random cropping). In Figure 5, we analyze the impact of **different selections for (σ_θ, σ_φ, σ_δ)**. Generally, our method is not sensitive to these hyper-parameters. We respectfully inquire whether you are referring to another kind of ablation study. If so, could you kindly provide more specific settings, and we will make the necessary additions? --- ***W2: Clear real-world evaluation protocol.*** Currently, the 6 benchmarks in Table 2 are evaluated on real-world datasets and reported with quantitative 3D orientation error. Moreover, we have also tested on the latest and largest 3D orientation benchmark, ImageNet3D [1], which covers 200 object categories. The Acc@30° results are as follows:

| Setting | Model | Avg. | Electronics | Furniture | Household | Music | Sports | Vehicles | Work |
|---|---|---|---|---|---|---|---|---|---|
| Zero-shot | ImageNet3D-ResNet50 | 37.1 | 30.1 | 35.6 | 28.1 | 11.8 | **51.7** | 36.7 | **40.9** |
| | Orient Anything-B | **48.5** | **61.0** | **66.8** | **37.9** | **27.3** | 25.6 | **70.8** | 33.4 |
| Fine-tuning | ImageNet3D-ResNet50 | 53.6 | 49.2 | 52.4 | 45.8 | 26.0 | 65.2 | 56.5 | 58.5 |
| | ImageNet3D-DINOv2-B | 64.0 | 75.3 | 47.9 | 32.9 | 23.5 | **74.7** | 38.1 | **64.0** |
| | Orient Anything-B | **71.3** | **77.6** | **89.7** | **64.4** | **54.4** | 47.6 | **87.4** | 61.2 |

Note that we couldn't find detailed definitions for the major categories like *Electronics*, *Furniture*, and *Household* in ImageNet3D, so we used GPT to map its 200 categories into the 7 broader ones. Therefore, the comparison results for each category may vary, and the average score provides a more meaningful comparison. [1] Ma W, Zhang G, Liu Q, et al. ImageNet3D: Towards general-purpose object-level 3D understanding. NeurIPS 2024. --- ***W3: Missing discussion.*** We will include more related works on synthetic-data approaches and self-supervised orientation methods. We also respectfully inquire if you could provide more specific related works. --- ***Q1: Mapping from predicted angles to directions in COCO evaluation.*** As discussed in Section 6.2, the COCO direction evaluation focuses only on the horizontal plane (i.e., the azimuth angle). Specifically, we simply map the predicted azimuth angle (0-360°) to the 8 directions with 45-degree intervals. For example, 0±22.5° corresponds to the front, 45±22.5° to front-left, 90±22.5° to left, and so on.
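For concreteness, this bucketing can be sketched as follows. This is a minimal illustration, not code from the paper: the `azimuth_to_direction` helper is hypothetical, and only the first three labels (front, front-left, left) are stated in the response above — the ordering of the remaining five labels is our assumption that the same rotation continues.

```python
# Map a predicted azimuth (degrees, 0-360) to one of 8 directions at
# 45-degree intervals. Front is centered at 0 deg, front-left at 45 deg,
# left at 90 deg (per the response); the rest are assumed to follow.
DIRECTIONS = ["front", "front-left", "left", "back-left",
              "back", "back-right", "right", "front-right"]

def azimuth_to_direction(azimuth_deg: float) -> str:
    # Shift by half a bucket (22.5 deg) so that e.g. 337.5-360 deg and
    # 0-22.5 deg both land in the "front" bucket.
    idx = int(((azimuth_deg + 22.5) % 360) // 45)
    return DIRECTIONS[idx]

print(azimuth_to_direction(10))   # front
print(azimuth_to_direction(50))   # front-left
print(azimuth_to_direction(95))   # left
```

Only the azimuth enters this mapping; the polar and rotation angles are ignored, which is exactly the simplification the reviewer asks about.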
--- ***Q4: Extend to video clip.*** Simply performing per-frame predictions on video data, followed by cross-frame smoothing through simple averaging, can yield relatively consistent and accurate orientation estimation. Some examples are provided in <https://anonymous.4open.science/r/visualization-B728/Video_Cases/>.
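One subtlety worth noting for the cross-frame averaging mentioned above: naive averaging of angles breaks at the 0°/360° wrap-around (359° and 1° would average to 180°). A circular mean over a sliding window avoids this. The sketch below is our own illustration of such smoothing, under these assumptions, and is not the authors' implementation:

```python
import math

def smooth_azimuths(per_frame_deg, window=5):
    """Smooth per-frame azimuth predictions (degrees) with a sliding window.

    Each window is averaged on the unit circle (mean of cos/sin, then
    atan2), so 359 and 1 average to roughly 0 rather than 180.
    """
    smoothed = []
    n = len(per_frame_deg)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)  # clipped window
        xs = sum(math.cos(math.radians(a)) for a in per_frame_deg[lo:hi])
        ys = sum(math.sin(math.radians(a)) for a in per_frame_deg[lo:hi])
        smoothed.append(math.degrees(math.atan2(ys, xs)) % 360)
    return smoothed
```

For example, `smooth_azimuths([358, 0, 2], window=3)` keeps the middle frame near 0° instead of pulling it toward 180°.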
Summary: This paper proposes Orient Anything, a method that obtains orientation through 3D assets and distilled VLM annotation. Although this paper is somewhat overclaimed, it is pioneering. Claims And Evidence: Yes Methods And Evaluation Criteria: The paper is meaningful; most previous academic research has focused on location or spatial relationships, especially in robotic tasks. But I think this paper overclaims: the authors only render images of 80K 3D objects to generate orientations, which is far fewer than the sample sizes used in other xxx-Anything works such as Segment Anything and Depth Anything, and even fewer than the COCO dataset. I am skeptical of its generalizability. In addition, the zero-shot performance in Table 3 appears to be not ideal. Theoretical Claims: N.A. Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The concept of orientation comes from object pose. This article directly obtains the orientation in the camera coordinate system from 2D images and does not depend on any template. Essential References Not Discussed: None Other Strengths And Weaknesses: This paper has many experiments and visualizations, and the combination of the model and the LLM has impressed me with its ability to enhance orientation understanding. Other Comments Or Suggestions: Recently, SoFar used a similar approach for orientation learning and understanding, filtered and trained on a larger dataset (the full set of Objaverse 1.0). I suggest that the authors use similar methods and data to scale up Orient Anything. SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation Questions For Authors: 1. Does Orient Anything generalize to the position (e.g., in a corner) and size (too large or too small) of the object in the image? 2.
Many items do not have a clear "front side", such as apples, footballs, and light bulbs; how should the orientation of such objects be defined? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive review recognizing the significance of our paper, and for your invaluable suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**. --- ***W1&W2: Scaling up Orient Anything.*** Thank you for your suggestion. The SoFar dataset is really helpful, and we will incorporate it to further scale up. In fact, we are also actively working on expanding our training data to a scale of 10+ million to further improve Orient Anything. Recent breakthroughs in open-source 3D asset generation models (e.g., Hunyuan3D [1], TRELLIS [2]) have demonstrated impressive results, with outputs now reaching sufficiently high quality. On the other hand, as discussed in Section 7.3 of our manuscript, the initial Orient Anything model can serve as an annotator to robustly label the orientation of 3D assets through multi-view voting. This voting mechanism helps reduce prediction errors, achieving robustness beyond the model's original capabilities. These two observations show the potential to increase the number of annotated 3D assets to any scale. In <https://anonymous.4open.science/r/visualization-B728/Synthesized_Assets_and_Voting_Annotation/>, we showcase examples of synthesized 3D assets and the voting-based orientation annotations produced by Orient Anything. We hope this further observation can address your concerns regarding scaling up. [1] Zhao Z, Lai Z, Lin Q, et al. Hunyuan3D 2.0: Scaling diffusion models for high resolution textured 3D assets generation. arXiv preprint arXiv:2501.12202, 2025. [2] Xiang J, Lv Z, Xu S, et al. Structured 3D latents for scalable and versatile 3D generation. arXiv preprint arXiv:2412.01506, 2024. --- ***Q1: Generalization to object position and size.*** In <https://anonymous.4open.science/r/visualization-B728/Corner_TooBig_TooSmall_Case/>, we provide visualizations of objects that are too large, too small, or in a corner.
Overall, our model demonstrates generalization to these scenarios. --- ***Q2: Items without a clear "front side".*** During annotation, 3D assets lacking a clear "front side" are explicitly labeled as having no front side, as shown in <https://anonymous.4open.science/r/visualization-B728/Training_Samples/>. We identify these cases using two methods: symmetry detection and VLM-based semantic understanding (as illustrated in Figure 3b, "Orientation Annotating"). During training, Orient Anything learns to predict a confidence score indicating the likelihood of an object having a clear front side. During inference, Orient Anything can predict low confidence scores for objects lacking a clear "front side" to reflect their ambiguous orientation. Indeed, the "Judgment" column in Table 3 reports the accuracy of judging whether an object has a distinguishable front face.
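To make the multi-view voting idea from the scaling-up response concrete, here is a toy sketch of a voting annotator. The aggregation rule (mapping each per-view prediction back into the asset's canonical frame via the known camera azimuth, then majority-voting over coarse angular bins) and the `vote_front_azimuth` helper are our assumptions for illustration, not the paper's exact procedure:

```python
from collections import Counter

def vote_front_azimuth(cam_azimuths_deg, pred_azimuths_deg, bin_deg=45):
    """Toy multi-view vote for an asset's canonical front azimuth.

    Assumption (for illustration only): adding the known camera azimuth
    to the per-view prediction recovers a candidate canonical front.
    Candidates are binned coarsely and the majority bin wins, so a single
    bad per-view prediction is outvoted by the consistent views.
    """
    candidates = [(cam + pred) % 360
                  for cam, pred in zip(cam_azimuths_deg, pred_azimuths_deg)]
    votes = Counter(int(c // bin_deg) for c in candidates)
    best_bin, _ = votes.most_common(1)[0]
    return best_bin * bin_deg + bin_deg / 2  # center of the winning bin
```

With four views agreeing on a canonical front near 100° and one outlier view, the outlier is outvoted and the annotation falls in the agreeing bin, which is the error-suppression effect the rebuttal describes.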
From Weight-Based to State-Based Fine-Tuning: Further Memory Reduction on LoRA with Parallel Control
Accept (oral)
Summary: This paper discusses PEFT from the new viewpoint of control theory. From control theory, a new State-Based Fine-Tuning (State-FT) method is proposed, where the network is modeled as a graph with each edge representing weights and each node representing activations. Thus, any component, such as an MLP or a couple of layers, can be viewed as a unified unit, whose output state (activations) can be modified by trainable parameters/non-linear functions. This method can reduce GPU memory since the intermediate activations that need to be saved for gradient calculation are reduced. Experiments demonstrate this method's effectiveness. ## update after rebuttal The authors have properly addressed my concerns and I recommend accepting this paper. Claims And Evidence: Yes. There is clear and convincing evidence. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. I have done my best to check the correctness of Theorem 4.1. Experimental Designs Or Analyses: The experimental designs (RoBERTa, LLaMA2-7B, LLaMA2-8B) are fair. Supplementary Material: Yes. I reviewed the proof of Theorem 4.1. Relation To Broader Scientific Literature: This paper mainly relates to the PEFT area, e.g., LoRA and DoRA. Essential References Not Discussed: This paper has discussed related works. Other Strengths And Weaknesses: Strengths: 1. Clear writing and good motivation for this state-based FT method. 2. Good results. Weaknesses: 1. Lack of clear discussion about the difference between the current work and the pioneering work (Zhang et al., 2024b). 2. Although the paper may seem complicated, the method is very simple. I think the preceding part about control theory is more like an excessive theoretical embellishment. 3. To provide more insight, could you provide more experimental results using other units? For example, using a whole block (containing ATTN and FFN) as the unit. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and positive comments. We would like to clarify a few points to address the concerns raised. ### 1. Discussion of the difference between the current work and the pioneering work (Zhang et al., 2024b). We agree with the reviewer that it is important to clearly highlight the differences between our approach and previous work. To address this, we would like to include the following discussion in the revised version. In general, the differences between the current work and the pioneering work lie in two aspects. (1) **Different Aspects of Control Theory**. The previous work primarily focuses on the controllability aspect of control theory, aiming to understand how system parameters can be adjusted to influence system behavior. In contrast, our work introduces the concept of state-based tuning, drawing inspiration from feedback control systems. Rather than focusing on the controllability of the system, our method emphasizes how adjusting the system's state (as opposed to directly manipulating model weights) can lead to improved performance. This distinction highlights that these two papers target different areas of control theory. (2) **Memory and Computation Efficiency**. The most notable difference lies in memory and computation efficiency. The prior approach enhances the LoRA method by introducing additional cross-patch attention mechanisms to modify the low-rank matrices. While this increases the model's complexity, it also leads to increased memory consumption and training time due to the added operations and parameters. In contrast, our primary goal is to achieve memory efficiency by reducing the memory footprint, specifically by bypassing large intermediate states during model execution. --- ### 2. Control theory is like theoretical embellishment. We note the reviewer's concern regarding the role of control theory in our work and would like to clarify its necessity and impact.
(1) **Bridging PEFT and State-Based Control**. While PEFT methods like LoRA primarily focus on tuning model weights, classical control theory is centered around adjusting system states, as seen in feedback control. These two perspectives may initially seem unrelated, but our key argument is that LoRA can be understood as a special case of state-based fine-tuning. This reframing is not just a theoretical exercise—it provides a new lens for understanding and designing PEFT algorithms. This allows our framework to establish a direct connection between PEFT and state-based control in deep NNs. This bridging alone is the key message that the authors want to convey in the first part. (2) **Unlocking New Opportunities**. Control theory offers a wealth of results that could inspire and inform the study of PEFT algorithms. By framing LoRA within a control-theoretic perspective, we naturally arrive at several open research questions. For example, an essential question arises: Which states should we tune? This corresponds to selecting the appropriate edges on the computational graph, analogous to choosing control variables in feedback systems. Many well-established results from control theory could potentially be applied here, but they have not been fully explored in the PEFT community. Thus, the integration of control theory is not mere embellishment; it serves as a foundation that opens new avenues for both theoretical and practical advancements. We will ensure that this motivation is articulated more clearly in the revised version. --- ### Distinguishing the Proposed Method from LoRA The second part of our paper introduces a shift where we reposition the low-rank matrices. We acknowledge that our approach is simple and appears structurally similar to LoRA, yet the underlying formulation corresponds to distinct control frameworks. (1) **Different Forms of Control Systems**.
From a control-theoretic standpoint, LoRA and DoRA can be viewed as *non-affine* control mechanisms: $\dot{x}(t) = f(x(t), u(t))$, where the control is deeply entangled with the model's attention mechanisms. In contrast, our proposed parallel control formulation aligns more closely with *affine control*: $\dot{x}(t) = f(x(t)) + g(x(t)) u(t)$, where the control component is decoupled from the function $f$. This distinction is crucial—affine and non-affine controls represent fundamentally different classes of control design, each with unique stability and efficiency properties. Our work leverages this difference to optimize memory consumption. (2) **Practicality of the Proposed Algorithm**. Beyond the above considerations, a key advantage of our method is its simplicity and ease of integration. By making minimal modifications to existing architectures, our approach remains practical and can be readily applied to various model designs. --- ### 3. More experimental results if using other units like a total block. Please kindly refer to our response to Reviewer 1 (hdej). --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I still have some further questions after reading your rebuttal. 1. Which states should we tune? Could you show some examples by borrowing ideas from feedback systems? This would be interesting. 2. Affine and non-affine controls have different unique stability and efficiency properties. Could you explain these more clearly? --- Reply to Comment 1.1.1: Comment: These topics may go slightly beyond the scope of the current paper, but are of scientific interest, and we would be glad to engage in some discussions. ### Examples by borrowing ideas from feedback systems This is indeed an interesting yet open question. Current matrix selection in PEFT is largely guided by empirical studies. For instance, Section 7.1 of the LoRA paper empirically investigates which weight matrices should be adapted under a constrained parameter budget.
Classical control theory may offer several principles that could provide insights into this problem: 1. **Controllability Analysis**: When tuning a subset of parameters, priority should be given to those with the greatest influence on the system's ability to reach desired states. The *controllability Gramian* quantifies how effectively states can be controlled, potentially guiding the parameter selection for adaptation. Notably, *network controllability* [4] focuses on identifying the minimum number of driver nodes, whose control is sufficient to fully control the system's dynamics. This should be relevant to efficient finetuning. 2. **Relative Gain Array (RGA)**: The RGA offers a normalized representation of the system's gain matrix, highlighting the relative influence of each control variable on various outputs. Given a linearization of the fine-tuned model, this framework may help identify the most effective parameter subsets for model adaptation. --- ### Discussions on Affine and Non-Affine Control In classical control theory, researchers and engineers often **prefer control-affine systems** when possible because they're easier to analyze and implement. To see this, consider a simplified DC motor model: ẋ₁ = x₂, ẋ₂ = -a·x₂ + b·u, where x₁ = θ (position), x₂ = ω (angular velocity). This is a control-affine system. With state feedback control u = -k₁x₁ - k₂x₂, the closed-loop dynamics become: $A = \begin{bmatrix} 0 & 1 \\ -b k_1 & -a - b k_2 \end{bmatrix}$. For stability, we may want poles at s = -p₁ and s = -p₂. This gives: k₁ = p₁p₂/b, and k₂ = (p₁+p₂-a)/b. This allows us to design control to reach the desired state in a **closed-form solution**. In contrast, a non-affine system such as ẋ₂ = -a·x₂ + b·tanh(u) requires *iterative* or *nonlinear* techniques, complicating stability analysis and tuning. #### The MPC Case Compared to non-affine control, affine control offers greater computational efficiency and analytical tractability.
For instance, in model predictive control (MPC) for the dynamics $$ x_{n+1}=f(x_n)+g(x_n)u_n $$ with the quadratic cost function: $$ J=\sum_{k=0}^N \left[ (\bar x_{n+k}-x^{ref})^\top Q (\bar x_{n+k}-x^{ref})+u_{n+k}^\top R u_{n+k} \right]. $$ Here, $x^{ref}$ denotes the reference state. Since the system is affine in $u$, the cost function is **quadratic and convex** in $u$. In contrast, if the system is non-affine in $u$, the cost might be non-convex. #### Example from stability analysis Moreover, the stability analysis of control-affine systems is well-established in the literature [2], since the affine structure simplifies the construction of control Lyapunov and barrier functions. For example, the control Lyapunov function for a general control system $\dot x(t)=f(x(t), u(t))$ with control set $U$ is defined as a function $V>0$, such that for any $x$, $$ \inf _{u\in U} \langle \nabla V(x), f(x,u) \rangle < 0. $$ For general non-linear systems, such a condition is difficult to identify. However, for control-affine systems $\dot x(t)=f(x(t))+g(x(t))u(t)$, the condition becomes more concrete as $$ \inf _{u\in U}\left[L_f V(x)+L_g V(x) u\right] < 0 $$ where the optimization problem is **linear in $u$**. Here $L_f V$ denotes the Lie derivative of $V$ along $f$. With proper linearization techniques, the Lyapunov function can be designed for many systems. #### When do we use non-affine control? Conversely, non-affine control is more difficult to analyze due to the non-linear dependence on control. However, it may provide more flexibility and potential benefits in complex systems. Non-affine control systems might be preferred when: * The physical system *inherently* has non-affine dynamics * More complex control behaviors are required or the application demands non-linear control responses. Finally, LoRA operates as a non-affine control mechanism, whereas our method follows an affine-control framework.
This paper focuses on the memory reduction part, and the theoretical benefits need further study. --- [1] Model predictive control: theory, computation, and design (Vol. 2). Madison, WI: Nob Hill Publishing, 2017. [2] A survey on the control lyapunov function and control barrier function for nonlinear-affine control systems. IEEE/CAA Journal of Automatica Sinica, 2023. [3] Model predictive control for nonlinear affine systems based on the simplified dual neural network. In IEEE Control Applications,(CCA) & Intelligent Control, 2009. [4] Controllability of complex networks. Nature, 2011.
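As a numerical sanity check of the DC-motor pole-placement formulas given earlier in this thread (the values a = 2, b = 1 and desired poles p1 = 1, p2 = 3 are arbitrary choices of ours):

```python
import numpy as np

# Closed loop for x1' = x2, x2' = -a*x2 + b*u with u = -k1*x1 - k2*x2.
a, b = 2.0, 1.0
p1, p2 = 1.0, 3.0  # desired poles at s = -p1 and s = -p2

k1 = p1 * p2 / b          # closed-form gains from the discussion above
k2 = (p1 + p2 - a) / b

A = np.array([[0.0, 1.0],
              [-b * k1, -a - b * k2]])
poles = np.sort(np.linalg.eigvals(A).real)
print(poles)  # → [-3. -1.]
```

The closed-loop matrix's eigenvalues land exactly at the requested poles, which is the "closed-form solution" property of the affine case that the rebuttal contrasts with iterative design for non-affine systems.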
Summary: This paper presents a state-based fine-tuning framework, which can avoid storing large intermediate states during training. Empirical results show its effectiveness. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense. Theoretical Claims: I have checked the correctness of any proofs for theoretical claims. Experimental Designs Or Analyses: I have checked the soundness/validity of any experimental designs or analyses. In Table 4, the comparison between ChatGPT and other baseline methods seems unfair, as there is no fine-tuning in ChatGPT. Supplementary Material: I have reviewed the supplementary material, i.e., the code. Relation To Broader Scientific Literature: The key contributions of the paper are related to the broader scientific literature in PEFT methods, such as LoRA and DoRA. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1. This paper is well-written and easy to read. Figures 1, 2 clearly show the advantage of the proposed method. 2. The experiments of this paper validate the proposed method's efficiency. Weaknesses: 1. The contribution of this work is limited. The idea of state-based fine-tuning seems to apply LoRA from the QKV matrices to the FFN/ATTN block, which appears to be an incremental extension of existing weight-based methods like LoRA. Moreover, the authors acknowledge that LoRA can be viewed as a special case of their framework (Section 3.3). This suggests that the proposed method is not fundamentally new but rather a generalization of existing techniques. 2. The empirical results, while showing some improvements in memory efficiency and training time, do not convincingly demonstrate the superiority of the proposed method. The performance gains are marginal (e.g., a 0.12% improvement in accuracy on ViT, Table 1). 3.
The paper lacks a strong theoretical foundation to justify the state-based tuning framework. The connection between control theory and state-based tuning is superficial. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable suggestion. ### 1. The idea of state-based fine-tuning seems to apply the LoRA from the QKV matrix to the FFN/ATTN block We understand the reviewer's concern that our contribution may seem limited to proposing an algorithm, and we would like to clarify this. First of all, our goal in proposing the State-FT method is to provide a **new viewpoint** linking the LoRA method with control approaches. While LoRA is typically framed as a low-rank fine-tuning method that modifies specific model weights, this contrasts sharply with classical control theory, which focuses on adjusting system states, as seen in feedback control. These two areas have largely developed independently. A central argument of our work is that LoRA can, in fact, be viewed as a special case of state-based fine-tuning. As noted by Reviewers hdej and uZCq, our framework aims to establish a direct connection between parameter-efficient fine-tuning and control formulations in deep neural networks. This is the key message we want to deliver when establishing state-based FT. Such a viewpoint also paves the way for new algorithmic design. We propose to treat fine-tuning a neural network as a modification of its computational graph (a DAG), allowing us to rethink tuning as a structural modification of the network itself. This means we can introduce or remove specific edges within the computation graph based on task-specific requirements, rather than being limited to conventional weight adjustments. For example, we can add edges that may not exist in the original network, altering the computation flow—something not captured by the original LoRA method. This viewpoint raises several **open questions** within the state-based tuning paradigm. For instance, *which states should be adjusted within the network, and how can we determine the optimal control strategy*?
There are many well-established results from control theory, and we would expect them to be applicable to fine-tuning research. Finally, we introduce an algorithm that tunes entire blocks as a unified unit. Our design philosophy is straightforward: **prioritize simplicity and effectiveness**. By keeping the whole architecture simple and making minimal changes, we expect our approach to remain broadly applicable across various network architectures. Meanwhile, we expect the algorithm to effectively reduce GPU memory, as demonstrated in the following experiments. In summary, we want to emphasize that our goals extend beyond one specific algorithm—we aim to bridge weight-based fine-tuning with the well-established principles of state-based control theory. This is the primary contribution of this paper. This perspective not only offers a fresh way to interpret LoRA but also raises several open questions, paving the way for future exploration. We will make these points clearer in the updated version. --- ### 2. The empirical results show some improvements in memory efficiency and training time, but the performance gains are marginal, e.g., 0.12%. As our title suggests, the core objective of our method is to minimize memory consumption and computational overhead, making it an efficient solution. We demonstrate that our approach reduces memory usage from 18.010GB to 12.280GB on the ViT toy model, achieving a 31.8% reduction. The performance evaluations we include serve a specific purpose: to demonstrate that these efficiency gains do not come at the expense of model quality. We hope the reviewer can understand that the 0.12% difference is not intended to suggest a significant improvement over the baseline. Rather, its goal is to convey another message—that our method enhances efficiency without sacrificing effectiveness. --- ### 3. Strong theoretical foundation to justify the state-based tuning framework.
Overall, we hope the reviewer can understand that this paper is not intended to be a theoretical paper. Instead, the analysis is mainly to support our claims. As alluded to earlier, the primary objective of our algorithm is to demonstrate that improvements in memory and computational efficiency do not come at the cost of model performance. Therefore, our analysis is centered on this key point. For linear cases, Theorem 4.1 establishes that the parallel control method retains the same expressive power as LoRA, provided both methods share the same total rank. This guarantees that adopting parallel control does not lead to a loss in expressiveness. For nonlinear cases, Theorem 4.2 further demonstrates that parallel control can offer greater adaptability, particularly in scenarios where the original model exhibits degeneracies. These results reinforce the robustness of our approach across different settings. If the reviewer has any additional concerns regarding necessary analyses, we would greatly appreciate the feedback and are willing to address them accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed explanations, which address most of my concerns. After reading the rebuttal, I acknowledge that using the control viewpoint offers new insights into LoRA. Accordingly, I will raise my score to 3. However, as reviewer eRj7 also noted, I still believe that further connecting control theory or methods to state-based tuning would strengthen the paper. --- Reply to Comment 1.1.1: Comment: Thank you, Reviewer jhpo! We will refine the control theory discussion for better clarity. Specifically, we will add a subsection to explicitly highlight the non-affine control property of LoRA, while emphasizing that our method follows an affine-control approach. We hope this distinction will more clearly differentiate our method from a control theory perspective.
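The memory argument running through this rebuttal (only operations whose parameters are trained require their input activations to be cached for backward) can be made concrete with a rough accounting sketch. All shapes and counts below are our own illustrative assumptions, not measurements from the paper:

```python
# Backprop caches the input of every op whose parameters receive gradients.
# Per-layer adapters (LoRA-style) touch several matmuls inside each block;
# one parallel control path per block needs roughly one boundary state.
blocks = 32                   # assumed transformer depth
adapted_per_block = 7         # assumed adapted matmuls per block
batch, seq, d = 1, 512, 4096  # assumed activation shape
bytes_per = 2                 # fp16

act_gib = batch * seq * d * bytes_per / 2**30
lora_cache = blocks * adapted_per_block * act_gib   # adapters everywhere
state_cache = blocks * act_gib                      # one state per block
print(round(lora_cache, 3), round(state_cache, 3))  # → 0.875 0.125
```

This ignores attention caches, optimizer state, and framework overhead; it only illustrates why moving the trainable path to block boundaries shrinks the activation term, in the spirit of the memory reductions reported above.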
Summary: This paper proposes a novel state-based fine-tuning framework named State-FT for parameter-efficient algorithms. The authors shift the focus from traditional weight-based adaptations (e.g., LoRA and its variants) to directly optimizing the model's intermediate forward states. Inspired by control theory, the proposed method, referred to as state-based fine-tuning (State-FT), directly modifies the intermediate states in the computation graph rather than adjusting the model's weight matrices. The authors demonstrate that State-FT significantly reduces GPU memory usage by compressing intermediate layers using 'DoubleControl' methods. Experiments show that this approach matches or outperforms baselines on the GLUE benchmark and commonsense reasoning tasks. Furthermore, it scales to larger models such as LLaMA2-7B and LLaMA3-8B, enabling fine-tuning on consumer-grade GPUs (under 24GB) without any quantization methods. Claims And Evidence: The paper's main claims are convincingly supported by a comprehensive control-theoretic basis and experimental results. The proposed state-based fine-tuning approach shows significant GPU memory reductions (approximately 3GB on medium-sized models such as RoBERTa and up to a 40% reduction on large models such as LLaMA2-7B and LLaMA3-8B), as the authors claimed. Additionally, the experimental results consistently show competitive or higher accuracy compared to baselines such as LoRA and DoRA, supporting the claim of representational capacity. However, one of the limitations is the lack of explicit comparisons to quantization-based methods (e.g., QLoRA), which also aim at memory reduction. This raises a concern about how effective the proposed method really is, since these baselines are missing. Furthermore, experimental validation for large-scale models focuses primarily on NLP and commonsense reasoning tasks, while only a limited CIFAR-100 evaluation is provided for vision tasks.
This scope raises concerns about the method's applicability across diverse domains. Methods And Evaluation Criteria: The method not only introduces a novel framework but also clearly justifies it, showing how the proposed method generalizes and explains the improved representational capacity. All of the proposed frameworks (State-FT and double control) are well motivated and logically supported, making effective use of control theory concepts to reduce total GPU memory usage. The evaluation involves standard benchmarks such as GLUE and the commonsense reasoning datasets previously used in DoRA. The authors reported GPU memory usage, training time, and parameter counts, comparing their approach with the original LoRA and DoRA under the same settings. However, for the vision domains, the evaluations were limited to a toy example, so the proposed method's effectiveness there could not be established. Theoretical Claims: Section 3.3 on State-FT shows that LoRA can be considered a special case of the proposed framework. There seem to be no issues with the theoretical claims. Experimental Designs Or Analyses: For the experimental settings, the authors cover everything from toy examples (ViT) to large models (LLaMA2-7B, LLaMA3-8B), GLUE, and commonsense benchmarks. They analyzed the number of parameters, GPU memory, and training time for each experimental setting, which could prove their method's effectiveness. Supplementary Material: The reviewer reviewed all parts of the supplementary material in Appendix A-B. Relation To Broader Scientific Literature: This paper provides a new perspective relative to original parameter-efficient training approaches (which target weight matrices). In addition, it reduces memory without the quantization schemes typically used for memory reduction.
State-FT contributes to the PEFT (parameter-efficient training) and memory efficiency areas, such as QLoRA or parameter-sharing, while the work more directly addresses activation memory. Essential References Not Discussed: AdaLoRA (Zhang et al., 2023), which is related to dynamic rank allocation, should be additionally referenced in the related works. Other Strengths And Weaknesses: **Strengths** 1. Provides novel aspects of the PEFT area that original works did not: As this paper mentions, the authors were inspired by control theory, which suggests new aspects and possibilities in the PEFT area. This novelty distinguishes it from previous PEFT methodologies while showing comparable results. 2. Provides strong theoretical justifications: The authors successfully demonstrate how state-based perturbations generalize and enhance the representational capabilities of the original LoRA methods. They support their methods both theoretically and experimentally. 3. Clear justification and motivation: The necessity of the method is well presented, particularly through detailed analyses breaking down GPU memory usage. The paper highlights critical memory bottlenecks in previous PEFT approaches, clearly motivating the need for their 'double control' method. In the end, they successfully show its practical effectiveness on consumer GPU hardware. **Weaknesses** 1. Lack of comparison with quantization-based methods: Although the paper mentions significantly reducing GPU memory usage, it omits direct comparisons with quantization-based fine-tuning techniques such as QLoRA or QA-LoRA. Since the paper focuses on memory efficiency, evaluating or discussing results against QLoRA seems needed. 2. The proposed method is not guaranteed to work in other domains, such as vision, since it only shows toy examples. Previous work (DoRA) demonstrated effectiveness on image-video-text understanding domains, so providing results from other domain tasks might increase confidence in this paper. 3.
Limited analysis of parameter sensitivity. This paper lacks sensitivity studies regarding the impact of hyperparameter settings (e.g., rank). Including sensitivity experiments could strengthen the paper's framework. Other Comments Or Suggestions: 1. The term 'parallel control' in Section 4.1 (line 248) could lead readers to think it refers to 'parallel pipelining.' Using the term 'block' in this case seems more appropriate, as the control unit is not working in parallel. In addition, the proposed method is mainly about 'double control', so renaming 'parallel control' to 'double control' seems to make sense. 2. Adding the results (at least QLoRA) might strengthen this paper's contribution. Quantization methods were mainly proposed to enable memory-efficient training while limiting performance degradation. If the performance degradation is severe, State-FT's necessity might be significantly increased. If not, it would be better if State-FT could apply the QLoRA method independently, as DoRA already showed. 3. As the authors mentioned in their limitations, it would be informative to report how much slower inference is, at least in the appendix. Questions For Authors: 1. Could the authors show the results, including the QLoRA and QA-LoRA quantization methods? In addition, could State-FT use the QLoRA method independently? 2. Could the authors provide a sensitivity study of changing rank settings? 3. The proposed method seems to be more efficient for large models such as LLaMA-13B or above. Could the authors provide results for larger models that could explain this method's scalability? 4. Results for vision tasks were only shown on a small dataset (CIFAR-100). Could the proposed method be adopted for the Image-Video-Text understanding domains that use the VL-BART model backbone, such as VQA, GQA, NVLR, and COCO Caption tasks?
In the commonsense reasoning tasks, the performance seemed to be competitive with existing methods, but the question remains whether State-FT could remain accurate on multi-modality fine-tuning tasks, as DoRA has already shown. It might strengthen the paper if State-FT works even on multi-modality domains. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for your positive and insightful feedback. ### 1. Quantization Method like QLoRA/QA-LoRA. Can State-FT use the QLoRA method independently? Yes, the State-FT method can independently leverage the QLoRA/QA-LoRA approach. GPU memory usage mainly arises from three sources: model weights, forward states, and backward gradients. QLoRA and QA-LoRA reduce memory by quantizing *model weights* and using low-rank matrices to minimize *backward gradients*. In contrast, State-FT focuses on reducing *forward states* and *backward gradients*. These methods are complementary, addressing different sources of memory cost. State-FT can be combined with the QLoRA method. This allows control to operate on a quantized model, hence reducing memory consumption across all three categories. To validate this, we evaluate on the commonsense dataset with the 8-bit LLaMA3-8B model.

||GPU Memory|Training Time|Accuracy|
|-|-|-|-|
|Double-Control|22.176 GB|20h33mins|84.7|
|Q-Double-Control|17.796 GB|39h20mins|84.3|

As shown, the use of QLoRA further reduces the memory consumption of the double-control approach by an additional 4.38 GB. However, this reduction in memory comes with trade-offs: training time increases by 91.40% and accuracy drops by 0.4%. We further investigated the effects of quantization with 4-bit methods on the RTE dataset.

||GPU Memory|Training Time|Accuracy|
|-|-|-|-|
|Control|12.634 GB|4min59s|$76.89 \pm 0.78$|
|Q-Control|4.548 GB|9min03s|$73.41 \pm 1.36$|
|Q-LoRA|4.688 GB|9min36s|$71.84 \pm 2.23$|
|Q-DoRA|4.742 GB|11min03s|$71.00 \pm 0.94$|

(1) As shown above, 4-bit quantization techniques effectively reduce GPU memory consumption. (2) Like its full-precision counterpart, Q-Control offers slight gains in memory, training time, and accuracy by reducing forward state memory, though overall savings remain modest.
(3) Similar to 8-bit, this reduction comes at a cost—Q-Control incurs a 3.48% accuracy drop and an 81.61% increase in training time on the RTE dataset. In summary, the proposed State-FT can also benefit from methods like QLoRA. Quantization can effectively further reduce memory consumption for control and other PEFT methods, but comes with trade-offs, such as increased training time and slight performance degradation. --- ### 2. Sensitivity Study of Rank Settings. We conduct a sensitivity analysis on the RTE dataset by varying the control rank to evaluate its impact on GPU memory usage, training time, and accuracy.

||GPU Memory|Training Time|Accuracy|
|-|-|-|-|
|r=1|12.620 GB|4min58s|$73.05 \pm 0.34$|
|r=4|12.622 GB|4min59s|$73.41 \pm 1.48$|
|r=8|12.626 GB|4min59s|$74.37 \pm 0.59$|
|r=16|12.634 GB|4min59s|$76.89 \pm 0.78$|
|r=32|12.674 GB|5min00s|$77.17 \pm 0.34$|
|r=64|12.724 GB|5min02s|$77.01 \pm 0.68$|

As the rank of the control parameters increases, GPU memory usage remains relatively stable, with only a slight rise. This is due to the use of low-rank matrices, which help reduce the memory consumption of backward gradients. For accuracy, we observe a steady improvement up to a rank of 32. However, at a rank of 64, performance slightly declines compared to rank 32, indicating diminishing gains beyond a certain threshold. Next, we compare the performance of Control, LoRA, and DoRA across different ranks:

||Control|LoRA|DoRA|
|-|-|-|-|
|Accuracy(r=8)|$74.37 \pm 0.59$|$74.84 \pm 1.48$|$74.97 \pm 1.45$|
|Accuracy(r=16)|$76.89 \pm 0.78$|$75.79 \pm 1.57$|$76.05 \pm 1.67$|
|Accuracy(r=32)|$77.17 \pm 0.34$|$75.97 \pm 2.07$|$76.65 \pm 1.19$|
|Accuracy(r=64)|$77.01 \pm 0.68$|$75.93 \pm 1.78$|$76.78 \pm 2.01$|

(1) At lower ranks (r=8), LoRA/DoRA achieve higher accuracy than the control method. (2) As the rank increases (r=16 and above), the control method begins to outperform both LoRA and DoRA.
(3) Given that increasing rank to 16 or 32 has a minimal impact on training time and memory usage, we recommend using a relatively high rank to maximize performance. (4) This trend is also observed on CoLA and SST-2 datasets. --- ### 3. Large Models like 13B, and Different Domains We expect State-FT to achieve similar memory reductions on larger models like 13B. Training these models may take several days or even weeks, exceeding the rebuttal period. We are actively working on these experiments. Below, we present preliminary results from a multi-task evaluation with VL-BART:

|Method|VQA|GQA|NVLR|COCO|Avg|
|-|-|-|-|-|-|
|FT|66.9|56.7|73.7|112.0|77.3|
|LoRA|65.2|53.6|71.9|115.3|76.5|
|DoRA|65.8|54.7|73.1|115.9|77.4|
|Control|65.9|55.1|72.7|115.9|77.4|

In terms of efficiency, the control method achieves a 31.6% reduction in memory usage and a 65.3% reduction in computation time compared to DoRA. --- ### 4. Essential References Not Discussed We notice [1] is a relevant paper and will include it in our reference list. --- [1] AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. ICLR 2023. --- Rebuttal Comment 1.1: Comment: The authors have provided adequate responses to my concerns. The results now seem to be more solid. I spent considerable time on whether to raise my score to a 4. My concern comes from the fact that the proposed method essentially adds another path to the model, even though it is supported by a solid derivation. However, since simple solutions that perform as well as more complex ones deserve recognition, I have decided to adjust my score to a 4. --- Reply to Comment 1.1.1: Comment: Thank you sincerely for your thoughtful consideration and for acknowledging our responses and the improvements made to the paper. Your feedback has been both encouraging and instrumental in helping us strengthen our work.
We’re glad that the simplicity and effectiveness of our approach came through, and we truly appreciate the time and care you invested in the evaluation. Thank you once again for your valuable input!
Summary: This paper proposes a new state-based fine-tuning framework that allows tuning entire residual blocks or multiple sequential sub-layers instead of adding adapters to each layer. The method significantly reduces the memory footprint by avoiding the storage of large intermediate activations while maintaining fine-tuning performance.

Claims And Evidence: Overall, the authors' claims are backed up by the provided experiments, which appear consistent with their theoretical motivation. More precisely, the experimental results support the paper's claim that state-centric fine-tuning is more efficient and has the same performance as weight-centric fine-tuning.

Methods And Evaluation Criteria: Yes, they do. In terms of methods, the paper's core contribution is to replace weight-based LoRA with a "control-based" injection of low-rank updates in the forward pass. Instead of decomposing weight updates in Q/K/V or feed-forward layers, the method combines entire residual (or multi-layer) blocks into one function and introduces a separate control path that is likewise parameterized by low-rank matrices. In terms of evaluation criteria, the authors rely on standard accuracy metrics for classification and QA tasks (e.g., MNLI accuracy, STS-B Pearson correlation, CoLA Matthew's correlation, etc.), as well as training resource metrics such as GPU memory consumption and wall-clock training time. These metrics are appropriate for testing whether a method is truly more parameter- or memory-efficient.

Theoretical Claims: Yes. I checked the proofs of Theorems 4.1 and 4.2 (guarantee of parallel control method performance). No major issues.

Experimental Designs Or Analyses: Yes. No major issues. The experiments cover both small-scale (CIFAR-100 on ViT) and medium-scale (GLUE tasks on RoBERTa) fine-tuning, as well as large-scale LLaMA2-7B/LLaMA3-8B models for common-sense QA tasks. The study ablates memory usage, training speed, and final performance scores.
These are well-chosen metrics for a new PEFT algorithm. One potential limitation is that each dataset is evaluated with a specific set of hyperparameters. It would be helpful to see more ablation on rank choices or to confirm that the approach remains stable with smaller ranks on more difficult tasks.

Supplementary Material: Yes. All of them. It would be helpful if the supplementary (or final version) provides additional derivations of the theorems or example pseudocode showing how to integrate the control approach with standard DL frameworks.

Relation To Broader Scientific Literature: The paper draws a novel connection between parameter-efficient fine-tuning and the optimal control formulation of deep neural networks. The paper positions itself as a generalization of LoRA in the context of control theory, a line of reasoning that has emerged in prior work linking neural networks to ODEs or closed-loop control systems. This is a valuable perspective. The discussion of other memory-efficient techniques (like QLoRA or other quantization-based PEFT methods) is comparatively brief, but the authors' emphasis is on building upon LoRA-style low-rank methods, so this is understandable.

Essential References Not Discussed: The paper covers the main PEFT references (LoRA, DoRA, and adapter/prompt-tuning families) and also cites relevant control-theoretic works. If anything, it might be beneficial to discuss more systematically how these "state-based" methods compare to existing large-activation checkpointing or gradient checkpointing strategies, which also address memory usage. Such a comparison would clarify the practical trade-offs.

Other Strengths And Weaknesses: Strengths:
- Comprehensive and strong experimental results.
- Significant memory savings allow training 7B/8B models on a single 24GB GPU (Nvidia 3090).
- The conceptual link between weight updates and states/controls may inspire further research.
Weaknesses: - Merging weights or adapters from multiple “control edges” can be more complicated in practice than with purely weight-centric methods. The authors do note an increase in inference time, though they characterize it as modest. - More ablation and clarity on how exactly “skipping” big intermediate states is implemented (particularly for widely used frameworks like PyTorch) would strengthen the practical dimension of the paper. Other Comments Or Suggestions: Including a clear, step-by-step pseudocode for the “DoubleControl” approach in the Supplementary can help readers adopt the technique. Questions For Authors: 1. How sensitive is the method to which blocks get controlled? Have the authors tried controlling only a portion of the MLP layers vs. controlling entire multi-head attention and feed-forward blocks together? 2. Can the authors elaborate on any overhead or complexity added during inference when “control edges” are introduced? Do you anticipate issues in deployment on standard inference platforms? 3. Is it feasible to combine two sets of “control edges” learned on different tasks? If so, how would that composition look? 4. Could gradient checkpointing or other memory-reduction approaches be integrated into this method to push memory usage down even further? Ethical Review Concerns: No ethics concerns. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the insightful and constructive feedback.

### 1. Sensitivity to the Choice of Controlled Blocks

Controlling either the MLP or attention layer yields comparable performance, as demonstrated by the ViT model results:

||MLP|Attn|Full Block|
|-|-|-|-|
|Performance|$91.96\pm0.05$|$91.93\pm0.03$|$91.53\pm0.11$|
|GPU Memory|12.280G|12.480G|12.196G|

These results indicate that controlling either the MLP or attention layer individually leads to similar performance, with attention control slightly reducing accuracy and increasing GPU memory usage. In contrast, controlling the entire residual block results in a more notable 0.43% performance drop. This is likely because each ViT layer consists of two distinct residual blocks, and treating the entire block as a single unit reduces its effectiveness.

To further examine the effects of tuning different portions of the MLP, we selectively tune either the first six or the last six layers of the ViT model while doubling the rank to maintain the same number of parameters.

||First 6 Layers|Last 6 Layers|All 12 Layers|
|-|-|-|-|
|Performance|$91.14\pm0.03$|$87.23\pm0.11$|$91.96\pm0.05$|
|GPU Memory|11.876G|7.180G|12.280G|

These results highlight that selectively tuning a subset of the MLP layers leads to a more pronounced drop in accuracy. Specifically, tuning only the first six layers results in a 0.82% accuracy reduction. On the other hand, tuning only the last six layers leads to a more substantial memory reduction, as forward states for the first six layers no longer need to be stored. However, this comes at the cost of a significant 4.73% accuracy drop. While these experiments are conducted on the ViT model, we observe a similar trend across other architectures. These ablation studies will be added to the appendix.

---

### 2. Combining two sets of control edges.
If two sets of edges share the same starting and ending nodes on a pretrained model, their combination can be carried out using existing LoRA merging techniques, such as ZipLoRA [1], K-LoRA [2] and SVD methods [3-4]. For example, given two sets of additional edges with weights $\Delta W_1$ and $\Delta W_2$, we can apply a similar approach as [1] and optimize $\ell_{\text{merge}}$ with
$$\Delta W = v_1 \circ \Delta W_1 + v_2 \circ \Delta W_2,$$
where $v_1$ and $v_2$ are vectors, and $\circ$ denotes element-wise multiplication, such that the $j$-th column of $\Delta W$ is scaled by the $j$-th element of $v$. The purpose of this lightweight tuning is to mitigate potential conflicts between different sets of low-rank matrices.

If the two edges have different starting and ending nodes, they can be incorporated into the computation graph independently. However, two key considerations must be addressed. (1) Preserving the Directed Acyclic Graph (DAG): It is essential to ensure that the newly introduced edges do not violate the DAG structure. For example, if the edges include both $A \to B$ and $B \to A$, a cycle would be introduced, requiring structural trimming. (2) Resolving Conflicts: Different sets of edges may result in conflicting weight updates. To optimize performance, lightweight tuning remains a recommended step.

---

### 3. Compatibility with Gradient Checkpointing and Memory-Reduction Methods.

Yes, we use gradient checkpointing when training 7B/8B models to reduce memory. For other methods like quantization, please kindly refer to our response to Reviewer 2 (uZCq).

---

### 4. Overhead During Inference and Deployment on Standard Inference Platforms.

Extra control edges may lead to a slight increase in inference time. In the worst-case scenario, such as the double-control approach with two extra edges, our experiments show that the extra inference time is around $2.1\% \pm 0.3\%$ for RoBERTa-base, and $4.6\% \pm 0.5\%$ for the LLaMA2-7B model.
The introduction of control edges primarily impacts the forward pass by modifying the computation graph. Despite this, our method remains compatible with standard inference platforms like HuggingFace Transformers and Accelerators. While certain platform-specific optimizations, such as quantization, may require some adaptation, we do not anticipate significant changes.

---

### 5. Pseudocode and How to Implement Skipping Intermediate State.

In general, steps to skip big intermediate states include: (1) Inherit the original layer and insert low-rank matrices; (2) Define a control function to compute the controlled state $x_c$; (3) Add $x_c$ to the original output. We will include the pseudocode in the appendix to provide further clarity and would like to thank you once again for this valuable suggestion.

---

[1] ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs. ECCV 2024.

[2] K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs. arXiv 2025.

[3] Task Singular Vectors: Reducing Task Interference in Model Merging. arXiv 2024.

[4] Model merging with SVD to tie the Knots. ICLR 2025.
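The three implementation steps described in point 5 can be sketched as follows; this is a minimal NumPy illustration under our own assumptions (the frozen block, shapes, and names are hypothetical stand-ins, not the authors' actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4                                  # hidden width, control rank
W = rng.standard_normal((d, d)) / np.sqrt(d)  # frozen pretrained weights

def base_block(x):
    # Stand-in for a frozen pretrained sub-layer (e.g., an MLP block).
    return np.tanh(x @ W)

# (1) Insert low-rank matrices alongside the original layer.
A = 0.01 * rng.standard_normal((d, r))  # trainable down-projection
B = np.zeros((r, d))                    # trainable up-projection, zero-init

def control(x):
    # (2) Control function computing the controlled state x_c.
    return (x @ A) @ B

def controlled_forward(x):
    # (3) Add x_c to the original output.
    return base_block(x) + control(x)

x = rng.standard_normal((2, d))
out = controlled_forward(x)
# Zero-initializing B means the controlled model initially matches
# the frozen block exactly (as in LoRA-style initialization).
assert np.allclose(out, base_block(x))
```

Since only `A` and `B` would be trained, the backward pass only needs gradients through the control path; whether the base block's large intermediate activations can actually be skipped depends on the framework's autodiff, which is the point the promised appendix pseudocode would need to make precise.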
Competitively Consistent Clustering
Accept (poster)
Summary: When clustering dynamic data (with insertions and/or deletions), consistency of updated solutions is a concern that can be more relevant than optimal cost in practice. This work studies fully dynamic clustering algorithms for k-center, facility location, and k-median with competitive recourse. For fully dynamic inputs, the pathological example that required linear recourse inserts and deletes the same $O(1)$ points over and over again so that the value of an optimal solution oscillates between two magnitudes. Let $OPT^\beta_R$ be the recourse of an optimal offline algorithm with multiplicative cost approximation guarantee $\beta$ for the problem at hand, and $\Delta$ be the aspect ratio of the input. The authors obtain the following dynamic algorithms:
1. For k-center, a bi-criteria approximation that uses $(1+\alpha)k$ centers and has multiplicative $O(\beta)$ cost approximation with recourse $O(\log n \log \Delta / \epsilon^2) OPT^\beta_R$.
2. For facility location, it maintains a multiplicative $O(\beta)$ cost approximation with recourse $O(\log |F| \log \Delta) OPT^\beta_R$, where $F$ is the possible set of centers.
3. For k-median, a bi-criteria approximation that uses $O(k)$ centers and has multiplicative $O(\beta)$ cost approximation with recourse $O(\log n \log \Delta) OPT^\beta_R$.
The authors also show that at least one of the logarithmic dependencies is required and conduct experiments in the appendix.

## update after rebuttal
The authors addressed my questions and could clarify my doubts. I adjusted my score accordingly.

Claims And Evidence: All claims are backed by proofs.

Methods And Evaluation Criteria: Rigorous theoretical analysis combined with experiments is a very reasonable approach.

Theoretical Claims: Checked plausibility of proofs in the main part and skimmed through the appendix for completeness.
Experimental Designs Or Analyses: The extent of the experimental setups (3 datasets, one choice for k) is a bit limited, but the setup itself is appropriate.

Supplementary Material: No.

Relation To Broader Scientific Literature: Consistent clustering of dynamic data is an emerging topic in theory, and inherently important in many practical applications.

Essential References Not Discussed: None identified.

Other Strengths And Weaknesses: The paper introduces algorithmically nice and simple, but analytically non-trivial rounding schemes on top of a known algorithm for maintaining dynamic fractional solutions by Bhattacharya et al. By using competitive recourse, the authors show that one can circumvent the trivial worst-case bounds for fully dynamic algorithms and achieve close(r)-to-optimal recourse. On the other hand, bi-criteria solutions that use $O(k)$ centers but are compared to offline algorithms that use $k$ centers open a new gap. E.g., consider k-center and $k+1$ input points (or clusters) at large distance. While an optimal solution with $k$ centers needs to pay the large distance for at least one point, an algorithm that uses $k+1$ centers can achieve cost 0. Nevertheless, not using exactly $k$ but $O(k)$ centers is a concern that often lies mostly on the theory side of the problem.

Other Comments Or Suggestions: It seems appropriate to mention in the abstract that the algorithms for k-center/k-median actually compute bi-criteria approximations. Wherever $k'$ appears, it may be worth considering whether to mention its best known value as derived from the analysis in Appendix A.

Questions For Authors: 1. Can you add plots that show the actual number of centers that the dynamic algorithm opened? 2. Can the k-median results be extended to k-clustering (e.g., k-means)?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the thorough and very positive review. We will mention in the abstract that the algorithms are bi-criteria. Indeed, theoretically, the gap between a solution that uses k and k+1 centers may be large. However, we believe this rarely happens in practical instances. The rebuttal platform does not allow us to upload new plots, but we have added the plot of # centers opened as a function of time to our manuscript. In both our k-center and k-median experiments, the algorithm almost always opens no more than k centers. We did not think about the k-means objective. This is indeed a great and interesting future research problem.
Summary: This paper studies fully dynamic consistent clustering, specifically focusing on the $k$-center, facility location, and $k$-median problems. Previous work has focused on algorithms maintaining solutions close to optimal while minimizing recourse—the number of changes to centers over time. The key innovation here is the consideration of a "beyond worst-case" scenario. The authors design algorithms that maintain approximate solutions with recourse bounded competitively against the minimal recourse achievable by an offline algorithm. The main technique builds on the Positive Body Chasing framework recently introduced by Bhattacharya et al. [FOCS 2023], which provides fractional solutions to clustering problems. The paper's primary contribution is developing rounding methods to convert fractional solutions to integral ones while preserving approximation and recourse guarantees. Claims And Evidence: Proofs support the theorem statements. The paper explains high-level proof techniques quite clearly, and to my best comprehension, the proofs presented in the main body and appendix are correct. However, there is one important concern about consistency in the problem definition. The paper considers clustering formulations ($k$-center, facility location, and $k$-median), typically defined with at most $k$ centers allowed. Yet, in Theorem 1.1, the proposed algorithms use $(1+\varepsilon) \cdot k$ or even $O(k)$ centers, where $\varepsilon \in (0, 1]$. This contradicts standard definitions in the literature, where dynamic algorithms strictly maintain exactly $k$ centers (e.g., Łącki et al., 2024; Bhattacharya et al., 2024). Allowing extra centers makes the problem strictly easier, possibly trivializing it, as one could achieve approximation ratios smaller than 1 for the $k$-center problem by opening additional centers. I realise this issue is acknowledged briefly in the conclusion, but it should be justified and explained more clearly. 
Methods And Evaluation Criteria: The experiments compare the proposed method against the optimal fractional solutions and the theoretical bounds stated in the theorems, which is appropriate. However, the paper lacks experimental comparisons to previous methods from the literature, such as "Efficient and Stable Fully Dynamic Facility Location" by Bhattacharya et al. [NeurIPS 2022], which implemented a related algorithm for dynamic facility location. Theoretical Claims: I checked all proofs and found no issues. Experimental Designs Or Analyses: From the text the experimental design seems sound and valid. I wanted to check the code as well, but unfortunately no supplementary material was provided. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper contributes to the growing body of work on dynamic clustering and online approximation algorithms. Previous work on fully dynamic clustering focused on absolute recourse, e.g., Łącki et al. (2024) and Bhattacharya et al. (2024). In contrast, this work introduces the notion of competitively consistent clustering, aligning itself with recent trends in the broader literature that advocate competitive analysis against optimal offline recourse. This competitive recourse perspective originates from Bhattacharya et al. [FOCS 2023], and the current paper is (to the best of my knowledge) the first to effectively extend this framework to classical clustering objectives through tailored rounding schemes. Essential References Not Discussed: N/A Other Strengths And Weaknesses: S1) I think the overall proof techniques are quite nice. Combining the Positive Body Chasing framework with the different rounding mechanisms is nice, and I could see it having potential future impact. Other Comments Or Suggestions: 1. line 036, right column: Can use \citet to reference Lacki et al. (2024). Also on line 085, right column, among other places. 2. In Theorem 1.1, $\varepsilon$ is not defined. Questions For Authors: Q1. 
In the introduction you describe the problem as: $k$-center, in which we can open _at most_ $k$ centers (...). However, in the statement of Theorem 1.1, your maintained solution uses $(1+\varepsilon) \cdot k$ centers. Since $\varepsilon \in (0, 1]$, this means your solution uses potentially $2k$ clusters. However, this is different from the formal problem description (see the Claims and Evidence section above for more elaboration). Could you please clarify this, in case I misunderstood something.

Q2. Could you please elaborate on the update time of each algorithm? Your proposed method has low recourse, but I'm curious about the guarantees on the running time of each update.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the thorough review. Indeed, in the statement of Theorem 1.1, the result for the k-center is for every $\varepsilon \in (0, 1/2)$. We will add this to the statement. Our algorithms use 'resource augmentation', which is very common in competitive analysis (e.g. in caching, network routing, scheduling; see Chapter 4 of Beyond the Worst-Case Analysis of Algorithms by Tim Roughgarden for a survey), and Theorem 1.1 explicitly states this as an assumption. As reviewer 4 suggested, we will emphasize this more (also in the abstract). One may also think equivalently about resource augmentation as allowing our algorithm to use at most k centers, but comparing our solution to an all-powerful offline solution that knows the future but is only allowed to use slightly fewer than k centers. As Reviewer 4 (Bgbi) notes, giving an algorithm that opens precisely k instead of O(k) centers is a worthy open question, but a mostly theoretical one: in practice the parameter k is usually selected heuristically, and users of such algorithms are often equally happy with a clustering with k' = O(k) clusters so long as it still explains the data well, doesn't overfit, etc. Finally, we have added experiments to our paper that show that in practice, on our test instances, our k-center and k-median algorithms tend to open at most k centers anyway.

About empirical comparison to prior work: in our comments to Reviewer 1 (iX2S), we argue that the fractional OPT against which we compare our algorithm is the strongest possible dynamic benchmark (up to the constant slack factor of beta).

About the update time: Our main concern in this paper is the recourse. Note that already the update time for maintaining the fractional solution (Bhattacharya et al. [FOCS 2023]) may be polynomial. A naive implementation of our rounding algorithm requires poly(n,k) update time, where n is the number of clients at the time step.
We believe such an update can be done quite efficiently, but still at least in linear time. Designing an algorithm with competitive recourse and also fast update time is indeed an interesting distinct research question. --- Rebuttal Comment 1.1: Comment: Thank you for your response and for clarifying the significance of the theoretical results - please do emphasise the fact that your algorithms are bicriteria in the abstract and introduction. I still think the experiment section lacks a comparison to existing literature. I've updated my score accordingly.
Summary: This paper considers dynamic clustering problems, including dynamic k-center, facility location, and k-median. The goal is to maintain a constant factor approximation with small recourse (the total changes made to the solution). All existing works on this problem obtain an absolute recourse guarantee, namely, they prove a bound on the total recourse. The authors observe that such worst-case bounds often arise only in pathological situations. However, in most natural cases, the optimal offline recourse can often be made much smaller. This motivates the study of algorithms with competitive recourse guarantees, which are popular goals in the dynamic setting. The authors design algorithms with competitive recourse for all three clustering problems studied. The competitive ratio is logarithmic in the metric spread and the number of candidate centers. Specifically, their algorithm maintains bicriteria approximate solutions for $k$-center and $k$-median, and a constant factor approximation for facility location, both with logarithmic competitive ratio. Technically, the algorithm builds on a recent chasing positive body framework, with the development of certain rounding techniques to satisfy the recourse guarantees. Moreover, a lower bound is proved to show that the logarithmic competitive ratio is the best one can hope for, and experiments have been conducted to support the claims.

Claims And Evidence: The claims have been proved rigorously.

Methods And Evaluation Criteria: Yes

Theoretical Claims: I have not checked all the proofs in the appendix.

Experimental Designs Or Analyses: The authors have implemented their algorithms to verify their claims.

Supplementary Material: I have not read the proofs in the appendix but have read the experimental part in the appendix.

Relation To Broader Scientific Literature: I think the problem studied is important in algorithmic machine learning.
The main results of this paper may be interesting to anyone interested in clustering.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: Strength: the result is novel with theoretical depth. Weakness: Although the experiment has been conducted, it mostly aims at verifying the recourse guarantee. It might be good to test against some natural dynamic algorithms as benchmarks to show practical value. So far, I think the main value of this result is on the theory side.

Other Comments Or Suggestions: I have not found significant typos. The paper is written well.

Questions For Authors: What is the update time of your algorithm?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the thorough and very positive review. In our comments to Reviewer 1 (iX2S), we argue that the fractional OPT against which we compare our algorithm is the strongest possible dynamic benchmark (up to the constant slack factor of beta). See our comments to Reviewer 3 (G5B5) about the update time.
Summary: This paper studies fully dynamic clustering algorithms with competitive recourse guarantees. The authors focus on three classic clustering problems: k-center, facility location, and k-median. In the fully dynamic setting, given a metric space with n data points, a different set of points is chosen as clients at each time step. Then, the algorithm is required to maintain a sequence of clustering solutions over steps such that the clustering cost is approximately optimal in every step while minimizing the recourse, i.e., the total number of changes in the center set across steps. Let $\Delta$ be the aspect ratio of the metric space and $F$ be a subset of facilities in the space. For every $\beta \geq 1$, let $OPT_\beta$ be the optimal recourse of an offline solution that maintains a $\beta$-approximation. They provide algorithms for these problems which satisfy: for every $\beta \geq 1$, and $\epsilon \in (0,1/2)$,
- For k-center, it maintains $O(\beta)$-approximate solutions with $(1+\epsilon)k$ centers while ensuring recourse at most $O(1/\epsilon^2 \cdot \log |F| \log \Delta)OPT_\beta$.
- For facility location, it maintains $O(\beta)$-approximate solutions while ensuring recourse at most $O_{\epsilon}(\log |F| \log \Delta)OPT_\beta$.
- For k-median, it maintains $O(\beta)$-approximate solutions with $O(k)$ centers while ensuring recourse at most $O_{\epsilon}(\log |F| \log \Delta)OPT_\beta$.
They also provide the lower bound which shows that for any $\beta \geq 1$, any randomized algorithm that maintains an $O(\alpha \cdot \beta)$ solution requires recourse at least $\Omega(\min(\log |F|, \log_\alpha \Delta)) OPT_\beta$; moreover, for k-center and k-median this holds even with $O(k)$ centers. They achieve the upper bound by leveraging the Positive Body Chasing framework, which provides fractional solutions that they round efficiently while preserving approximation and recourse guarantees.
The paper presents both theoretical results with upper and lower bounds on recourse and experimental validation on UCI datasets, showing that the proposed methods significantly outperform the worst-case guarantees in practice.

Claims And Evidence: Yes, the paper's claims are well-supported by both theoretical analysis and empirical results. They provide both upper and lower bounds for the problems and run experiments to validate their algorithm on real-world datasets.

Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate. They utilize the Positive Body Chasing framework to obtain a fractional online solution, and then provide rounding techniques carefully designed to balance approximation quality and recourse minimization. The experiments use benchmark datasets from the UCI repository. The evaluation criteria are reasonable and well-justified, with an emphasis on both approximation quality and recourse efficiency.

Theoretical Claims: Yes, I checked the proofs of all upper and lower bounds in the paper and appendix. I only briefly checked the Positive Body Chasing framework used to obtain the fractional solution in Appendix A.

Experimental Designs Or Analyses: Yes, I checked the validity of the experiments in the appendix.

Supplementary Material: Yes, I reviewed the supplementary material including the k-median part, the lower bounds, and the experiments.

Relation To Broader Scientific Literature: The paper is well-connected to broad research topics including consistent algorithms, clustering algorithms, and online and dynamic algorithms.

Essential References Not Discussed: No. The paper covers the necessary references.

Other Strengths And Weaknesses: Strengths:
- The paper proposes these interesting fully-dynamic clustering problems.
- The paper provides interesting algorithms and gives theoretical analysis of the approximation on clustering cost and the total recourse.
- The paper also provides the lower bound on the recourse approximation factor for the fully dynamic clustering algorithm by constructing an interesting instance with an HST metric.

Other Comments Or Suggestions: Minor comments:
- Line 282, Proof of Lemma 3.3, I think $OPT^t$ should be $r_j$.
- Line 355, Proof of Theorem 3.4, there seems to be a missing sum over all time steps t = 0,1,..., T.
- In Theorem 1.1, what is the range of the parameter $\epsilon$? Also, it seems from the later analysis that, for facility location and k-median, the approximation factor for recourse should have a dependence on $\epsilon$ as well?
- In Remarks 3.2 and 3.5, I can get the intuition of the analysis. But, if there is space, I think it would be better to include a detailed analysis for one of them in the main body?
- Maybe some experiments can be included in the main body, one or two figures, given the remaining space in the paper?

Questions For Authors: - In the experiments, you mainly test your algorithms, the fractional optimal solution, and the upper bound given by the analysis. Would it be interesting to compare your algorithm with some other traditional clustering algorithms which are static and do not care about the recourse at all (just recompute the solution at every time step)?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the thorough and very positive review. We have fixed the issues raised in the comments, and will try to include a more formal proof of the remark. In Theorem 1.1, for facility location and k-median, we already lose a constant in the cost (or the number of centers). Hence, one may simply use $\varepsilon = 1$. We will add a comment on this.

A static algorithm that does not care about recourse, and recomputes the solution from scratch at every time step, may have a very large recourse. Consider, for example, a dynamic 1-center problem in which there exists a single point that is a good center for all time steps. An offline algorithm that recomputes the solution may fail to detect this point and keep using a different point at every time step. We claim that the benchmark we use in our experiments is more stringent/informative. We compare against the fractional OPT (up to a constant slack factor beta which we set to 1.5); up to this slack factor, this is better than any offline benchmark. Furthermore, of all possible fractional OPT solutions, this is the one with the best recourse possible.
Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning
Accept (poster)
Summary: Incorporating an invariance objective into unlearning tasks can prevent accidentally relearning unlearned WMDP knowledge when training on standard fine-tuning datasets.

Claims And Evidence: While there is some evidence that the method is robust against relearning via unrelated fine-tuning tasks, I don't think there is enough evidence across other models, unlearning settings, and relearning attacks using WMDP itself to be able to say for sure that this is a significant contribution, especially when other popular unlearning baselines designed to solve this problem are neglected.

Methods And Evaluation Criteria: The authors add a gradient norm penalty to enhance previously existing unlearning methods. It makes sense that this works because the unlearned model is now much more stable in the loss landscape of the dataset used for the grad norm penalty. This is a valuable way to enhance previous methods, and the limited settings the authors do evaluate show that it is a promising approach. The robust accuracy for unlearning is an interesting evaluation criterion. The evaluations that were performed make sense and are sound.

Theoretical Claims: No derivation is provided for (3), and it is unclear why (4) and then (5) serve as reasonable approximations that satisfy (2). Please provide the full derivation and proof of optimality of the approximations. I'm not sure what non-stationarity and (269) stationarity are meant to refer to. Section 5 is speculation without theoretical evidence. It isn't clear to me without proof that "fine-tuning direction resides in the ununlearning space" and "we expect the unlearning direction to be opposite to the fine-tuning direction." While Ilharco et al. 2023 is a nice empirical work, *linear* task directions are an unproven conjecture, so the analysis doesn't have much solid ground to go off.
I recognize the empirical illustration with actual measurements, but I am not convinced without a more robust experiment (a variety of fine-tuning and unlearning datasets as well as more algorithms) to validate the conjecture, and without theoretical justification. Even were this the case, I would hesitate to say it is what explains why ILU is more resilient than NPO. For example, an alternative explanation could be that the gradient regularization performed in (5) results in converging to a gradient-0 point, i.e., being trapped at a fixed point, which means that gradient descent isn’t likely to succeed. Finally, I find the motivation from IRM confusing. It isn’t clear to me what use this framework has when (A) we don’t actually end up needing (multi-)invariance to many datasets, (B) the advantage of IRM arises when w is a model that uses the invariant theta, but this isn’t done here since w=1, and (C) invariance during prediction (what IRM is used for) and invariance during training are two very different things: I think the claim that we can link invariance to robustness against fine-tuning isn’t shown. It isn’t clear to me why IRM should be used over a simpler motivation from the gradient norm. Experimental Designs Or Analyses: There are several flaws in this work regarding experimental design related to a limited evaluation. First, even though TAR [1] is mentioned several times in the paper as a solution that was designed to provide robust fine-tuning, it is neglected as a baseline. [2] is also a method that tries to treat this issue. [3] has provided a method for this as well, though not in unlearning, so I don’t expect the authors to add it (though [2] did use it). For fair evaluation I ask that the authors select baselines that were designed to address the issue of robust unlearning. [1] Tamirisa, R., Bharathi, B., Phan, L., Zhou, A., Gatti, A., Suresh, T., ... & Mazeika, M. (2024). Tamper-resistant safeguards for open-weight LLMs.
[2] Gandikota, R., Feucht, S., Marks, S., & Bau, D. (2024). Erasing conceptual knowledge from language models. [3] Rosati, D., Wehner, J., Williams, K., Bartoszcze, L., Gonzales, R., Maple, C., Majumdar, S., Sajjad, H., & Rudzicz, F. (2024). Representation noising: A defence mechanism against harmful fine-tuning. Second, even though the authors claim that WMDP is a more realistic setting than TOFU, MUSE, or the Harry Potter dataset, it is still important to provide at least some other datasets to demonstrate the reliability of the newly proposed method. So I ask that at least one other unlearning dataset is added. Third, although the relearning attack is discussed, and in my reading of “Robustness challenge in LLM unlearning” is made a focal point of unlearning robustness, the results of this attack are not presented. Please present the results of this attack, as it seems critical to being able to evaluate the utility of this method. Fourth, I am concerned that only one model is used to demonstrate the efficacy of the method. Supplementary Material: I have reviewed the appendices. Relation To Broader Scientific Literature: While unlearning had its moment last year when it was adapted to the LLM setting from computer vision, there is currently disillusionment about its utility while it remains vulnerable to easily performed attacks and even to accidental relearning. This work provides a valuable, unique method that supplements other ongoing work attempting to make unlearning robust. Essential References Not Discussed: I have mentioned these references earlier. Other Strengths And Weaknesses: To be clear, I find the actual method itself to be well validated within the scope of the experiments done. I also think this is a valuable contribution. I just can’t recommend acceptance until the reliability of the method is shown. Other Comments Or Suggestions: “ununlearning” is very confusing.
I think in the literature relearning is preferred. A.2. Which optimizer is used and what scheduler? I think this can matter a lot for robust unlearning (see [1]) [1] Deng, J., Pang, S., Chen, Y., Xia, L., Bai, Y., Weng, H., & Xu, W. (2024, May). Sophon: Non-fine-tunable learning to restrain task transferability for pre-trained models. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Derivation on (3), connection with (4)-(5), question on stationarity, motivation on IRM, and why not just gradient norm.** **A1:** Eq. (3) follows the standard IRMv1 relaxation from [R1], which approximates the bi-level IRM in Eq. (2) using a single-level gradient penalty. This promotes invariance across environments by encouraging stationarity (i.e., local optimality) of the shared predictor $\mathbf{w}$ across datasets $\mathcal{D}_i$. For a complete derivation, see Section 3.1 of [R1]. Stationarity means that $\nabla_w \ell_i(w \circ \theta) \approx 0$ for all $i$, implying $w=1$ is optimal across environments. The predictor can be fixed as the scalar $w=1$ without loss of generality, as shown in [R1]. Our ILU objective (Eqs. 4–5) extends Eq. (3) to LLM unlearning by replacing the empirical risk with the unlearning loss and treating the $\mathcal{D}_i$ as fine-tuning environments. The gradient norm penalty thus ensures that $\boldsymbol{\theta}$ remains locally optimal across tasks, making it robust to further fine-tuning. Hence, using the gradient norm is not arbitrary: it directly follows from IRM and supports the desired invariance. > [R1] arXiv:1907.02893. **Q2: Concerns about the lack of theoretical support in Section 5 and the speculative nature of linear task direction analysis.** **A2:** First, we admit that our task vector analysis is not a theoretical proof, but we believe it is grounded in practical relevance. In Lines 277–280, the task vector represents the direction from the fine-tuned model to the pre-trained model. We do not claim this as the precise gradient direction (i.e., no linear assumption is imposed). The use of task vectors to investigate task-wise learning direction geometry has been applied in non-linear cases [R2-4]. Even under a linear assumption, the work in [R3] demonstrates that this is a valid approach for practical task vectors ([Eq. (3), R3]).
Additionally, recent work [R4] provides a theoretical proof for the general task vector in model editing tasks. > [R2] arXiv:2212.04089 > [R3] arXiv:2305.12827 > [R4] https://openreview.net/forum?id=vRvVVb0NAz **Q3: Gradient regularization performed in (5) results in converging to a gradient 0.** **A3:** As addressed in Q1, the gradient norm is defined based on the fine-tuning datasets and aims to promote the universal optimality of the unlearned model on these datasets, originating from IRM. As a result, ILU does not hinder fine-tuning success (i.e., it avoids getting trapped in a poor local optimum), as demonstrated in Table 2 and Fig. 5. This is further confirmed by the task vector analysis, where ILU shows better alignment between pre- and post-fine-tuning directions, while NPO causes conflicts, as shown in Q2. **Q4: We don’t actually end up needing (multi) invariance to many datasets.** **A4:** For details on why ILU stays effective and does not require (multi) invariance to many datasets, please refer to Q4 from [Reviewer yxbz](https://openreview.net/forum?id=x2lm33kdrZ&noteId=NjDB2xVUnJ). **Q5: Invariance during prediction and invariance during training are two things.** **A5:** Thank you for the comment. We believe there exists a misunderstanding of IRM. We agree that prediction-time invariance and training-time robustness are conceptually different. However, in IRM, **training-time invariance** is the mechanism used to achieve prediction-time invariance. Specifically, IRM aims to learn a representation $\phi$ such that a single predictor $w$ performs optimally across environments (as formulated by the lower-level training problem over $w$ in Eq. 2). This is enforced during training by penalizing the gradient of the environment-specific loss with respect to $w$, ensuring $w$ remains stable across environments. Our work extends this training-time mechanism to promote parameter-level invariance in the unlearned model $\theta$.
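The IRMv1-style stationarity penalty discussed above (penalizing the gradient of each environment's loss with respect to the fixed scalar predictor $w=1$) can be sketched concretely. The squared loss, toy data, and closed-form gradient below are illustrative assumptions, not the paper's actual LLM setup:

```python
import numpy as np

def irmv1_penalty(features, labels_per_env):
    """IRMv1-style invariance penalty: for each environment i, take the
    gradient of the loss l_i(w * f) w.r.t. a scalar predictor w, evaluated
    at w = 1, and sum the squared gradients (stationarity across environments)."""
    penalty = 0.0
    for f, y in zip(features, labels_per_env):
        # Squared loss l(w) = mean((w * f - y)^2); its gradient at w = 1 is
        # 2 * mean((f - y) * f), written in closed form here.
        grad_w = 2.0 * np.mean((f - y) * f)
        penalty += grad_w ** 2
    return penalty

# Two toy "environments" sharing the same representation f(x).
f1, y1 = np.array([1.0, 2.0]), np.array([1.0, 2.0])  # w = 1 already optimal
f2, y2 = np.array([1.0, 2.0]), np.array([2.0, 4.0])  # w = 2 would be optimal
print(irmv1_penalty([f1, f2], [y1, y2]))  # 25.0: env 2 violates stationarity
```

A penalty of zero means $w=1$ is stationary in every environment; ILU replaces the per-environment risks with the fine-tuning losses of the unlearned model.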
**Q6: Several flaws in this work regarding experimental design.** **A6:** We are happy to add more experiments to strengthen our work; however, we respectfully disagree with the criticism that our current experimental design has "flaws." First, we have added an experiment comparing ILU with TAR (**[Table R3](https://ibb.co/4ZSkRQGh)**). Second, we discuss the suggested works [2] ("Erasing Conceptual Knowledge from Language Models") and [3] ("Representation Noising: A Defense Mechanism Against Harmful Fine-tuning"); however, we note that these works are quite different from ours. Specifically, [2] investigates robustness against input-level jailbreaking attacks, whereas our focus is on model-based fine-tuning. Third, we evaluated robustness against the relearning attack (**[Table R2](https://ibb.co/DHd5Zw2c)**). Fourth, we extended experiments to the MUSE dataset (**[Table R4](https://ibb.co/m5Rtqc3t)**). Lastly, we confirmed ILU's effectiveness across different LLMs and datasets; see our detailed responses to [Reviewer b9VT](https://openreview.net/forum?id=x2lm33kdrZ&noteId=aitszercwQ)’s Q1–Q3. --- Rebuttal Comment 1.1: Comment: Re: A1 - Thanks for this, I was missing the connection to stationarity across losses; optimality makes sense now! I really appreciate the effort put into A6. Largely, my concerns over the external validity and reliability of the study are satisfied. I am changing my score to a weak accept. As with other reviewers, I am not sure I am comfortable with a higher score without seeing more effort toward expanding the reliability and external validity of the study; there exist more unlearning methods, e.g., those listed in [1], and datasets that are still not evaluated. To make this more concrete, seeing one or two more methods and an evaluation on TOFU would satisfy me, and I ask that the authors add these in the camera-ready version. [1] Che, Z., Casper, S., Kirk, R., Satheesh, A., Slocum, S., McKinney, L. E., ... & Hadfield-Menell, D. (2025).
Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities. I will repeat for the sake of the Area Chair that I do think that this is a valuable contribution, and despite my concerns that the method is still not rigorously evaluated, the work is quite good. --- Reply to Comment 1.1.1: Comment: Thank you very much for recognizing our rebuttal efforts and raising the score. We also appreciate you pointing out [1] (a concurrent work, available in Feb. 2025) and suggesting we check other robust unlearning methods used in [1]. Per your suggestion, we carefully checked [1], which focuses on unlearning robustness evaluation on WMDP. The first thing we want to note is that TAR appears to be the strongest unlearning method in [1]. We already compared TAR with our approach in our earlier response, showing similar robustness gains but with significantly improved computational efficiency (see our previous response to [Q6](https://openreview.net/forum?id=x2lm33kdrZ&noteId=iyqcrAlefc) and response to [b9VT’s Q2](https://openreview.net/forum?id=x2lm33kdrZ&noteId=jtqu4A1bls)). Additionally, thank you for encouraging us to add TOFU experiments in the camera-ready version; we will make our best effort to do so. We value this suggestion, but we would like to respectfully note that the MUSE dataset used in our rebuttal shares a similar spirit to TOFU. Both aim to remove "factual" information from the forget set (fictional author information in TOFU, and copyrighted information in MUSE). We preferred MUSE to TOFU in the rebuttal because TOFU uses a p-value-based forget quality metric to evaluate unlearning effectiveness, which may not be very precise unless the p-value is close to 1 (good unlearning) or less than 0.05 (poor unlearning). A p-value greater than 0.05 (but away from 1) is not a precise metric for indicating “good” unlearning quality. Thank you once again for recognizing the valuable contribution of our paper and recommending it to the AC.
We truly appreciate your thoughtful feedback and the time you’ve dedicated to reviewing our work!
Summary: The paper introduces a novel approach to enhance the resilience of LLMs against the re-emergence of unlearned knowledge during downstream fine-tuning tasks. This is achieved through invariant LLM unlearning, which incorporates IRM principles into the unlearning process. The contributions of this paper are summarised as follows: LLM Unlearning Challenges: Current unlearning methods are effective but typically fail when the model undergoes downstream fine-tuning. This can inadvertently recover the knowledge that was supposed to be unlearned, posing a significant challenge in scenarios where LLMs are fine-tuned for new tasks post-unlearning. Invariant Risk Minimization: The paper leverages IRM, a technique used for training models to perform well across varied environments, to ensure that the unlearning remains robust even after subsequent model fine-tuning. IRM helps enforce that the model's predictions remain invariant, regardless of changes in data distribution or task. Invariant LLM Unlearning: ILU is proposed as a new framework that integrates IRM into the unlearning process. This approach aims to make the erased knowledge robustly inaccessible, even if the model is later adapted or fine-tuned for other tasks. Empirical Validation: The effectiveness of ILU is demonstrated through extensive experiments on the WMDP benchmark, which focuses on removing harmful knowledge from LLMs. ILU significantly outperforms existing state-of-the-art unlearning methods like NPO and RMU, particularly in maintaining unlearning robustness across various fine-tuning scenarios. Task Vector Analysis: The paper also introduces a task vector analysis to further understand how ILU maintains robustness. This analysis shows that ILU helps align the unlearning direction in a way that resists shifts caused by downstream fine-tuning, essentially preventing the relearning of unwanted knowledge. 
This work not only advances the field of machine learning by addressing a significant shortcoming in LLM unlearning but also sets a foundational approach for future research into creating machine learning models that are more resilient to dynamic operational environments. Claims And Evidence: I think the authors do not fully explain why the scenario of relearning occurs; this may also be why the parts motivating ILU at the beginning of Section 4 seem weak to me. I agree that NPO and RMU carry the risk of relearning, but it may be mainly due to their under-unlearning. I am not sure if using GA can also lead to the scenario of relearning. Also, I agree that GA will lead to over-unlearning and the deterioration of retention, but I think you could consider some of its improved versions, such as the task vector approach and WGA [1]. [1] Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond. I am not sure the operation ⊥ is well defined, or that we can formally claim that GSM8K is completely orthogonal to WMDP. From my understanding, using the gradient direction to understand the influence on model performance works under a linear assumption, which seems correct only if the magnitude of $\tau$ is quite small. So, though quite interesting, I am not sure if the analysis in Section 5 is solid enough. Methods And Evaluation Criteria: I think the explanation of why IRM works for LLM unlearning should be discussed more. Although the authors have shown some good properties of IRM in Section 5, there remains an open question about why it happens. Forgive me if I am incorrect, but it seems that the proposed method tries to find a solution in a relatively smooth loss landscape, or maybe IRM is trying to find a solution in a disentangled parameter space. Also, the reason why using a single fine-tuning dataset, and even the original unlearning dataset, can still be useful should be explained more, especially for the latter.
Some analysis and explanations may be required here. Theoretical Claims: Seemingly not applicable. Experimental Designs Or Analyses: It seems that the empirical results are limited to WMDP. More results on TOFU and MUSE would obviously make the results more solid, but I think it is acceptable for now to conduct experiments only on WMDP. Moreover, the baseline methods are limited to RMU and NPO; more methods should be included to make the results more solid. Some hyper-parameter sensitivity analysis and statistical reporting (e.g., std) would also be meaningful. Supplementary Material: Good enough. Relation To Broader Scientific Literature: This work enhances the resilience and robustness of LLM unlearning algorithms, which are known to be effective in alleviating or mitigating various negative societal impacts associated with the widespread use of LLMs. These impacts include, but are not limited to, avoiding copyright issues and preventing harmful content generation. Essential References Not Discussed: Good enough. Other Strengths And Weaknesses: I raise no other comments. Other Comments Or Suggestions: I raise no other comments. Questions For Authors: Please see the questions above. I think the paper is quite interesting; I will raise my score if the authors can answer the questions above. Code Of Conduct: Affirmed. Overall Recommendation: 2
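One of the "improved GA versions" the review mentions is task-vector negation (Ilharco et al., 2023): subtract the direction learned on the forget task from the base weights. A minimal sketch, where the weight vectors and the scaling coefficient `lam` are toy assumptions for illustration:

```python
import numpy as np

# Task vector: the direction from the pre-trained to the fine-tuned weights.
theta_pre = np.array([0.0, 1.0, -1.0])   # toy pre-trained weights
theta_ft  = np.array([0.5, 1.5, -1.0])   # toy weights fine-tuned on the forget task
tau = theta_ft - theta_pre               # task vector for the forget task

# Unlearning by negation: move the base model *against* the task vector.
lam = 1.0                                # scaling coefficient (assumed)
theta_unlearned = theta_pre - lam * tau
print(theta_unlearned)                   # weights pushed away from the forget task
```

The review's linearity concern applies here: the arithmetic treats the loss as locally linear in the weights, which is only plausible when `tau` is small.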
Rebuttal 1: Rebuttal: Thank you for your review. We respond to the key questions raised below. **Q1: Weak motivation for why relearning occurs (in Sec. 4) and under-unlearning in NPO and RMU?** **A1:** Thank you for the question. We provide further clarification below. First, we added experiments showing that WGA is similarly vulnerable to downstream fine-tuning (**[Table R5](https://ibb.co/qMgNgNrm)**). After fine-tuning, both RMU and WGA risk relearning across various datasets. Second, we remark that it may not be fully accurate to categorize NPO and RMU as under-unlearning. For instance, NPO over-unlearns in MUSE ([Table 3, https://arxiv.org/pdf/2407.06460]). We focus on these methods since they remain SOTA methods in the WMDP benchmark. Third, the motivation for relearning risk was introduced in Sec. 3 and Fig. 1, where we argue that fine-tuning may conflict with the optimized unlearning direction, potentially reversing its effect. In Sec. 4, we aim to align the unlearning direction with the fine-tuning direction by introducing the concept of IRM, treating fine-tuning as a different training "environment" the unlearned model must adapt to; see Lines 191-219. This also motivates the task vector analysis in Sec. 5 to study the relationship between unlearning and fine-tuning directions. **Q2: Definition of the operation ⊥? Is GSM8K completely orthogonal to WMDP? Gradient direction only works under a linear assumption?** **A2:** Thank you for your questions. First, the operation ⊥ in Sec. 5 is defined by a cosine value of 0 for the angle between two task vectors (Line 293). We use ⊥ to indicate irrelevance between the fine-tuning task and the unlearning task (Line 249). Second, we did not claim that GSM8K is fully orthogonal to WMDP; rather, it is an irrelevant fine-tuning task for WMDP unlearning. As shown in Fig.
4, the nearly 90-degree angle between the GSM8K vector ($\tau_\mathrm{ft}$) and the WMDP vector ($\tau_\mathrm{ILU}$) demonstrates minimal conflict. Third, our analysis in Sec. 5 is based on the task vector, which reflects the direction from the fine-tuned model to the pre-trained model. We do not claim this as the precise gradient direction (i.e., no linear assumption is imposed). The use of task vectors to investigate task-wise learning direction geometry has also been applied in non-linear cases [R1-3]. Even under a linear assumption, the work in [R2] shows that this is a valid approach for practical task vectors (e.g., [Eq. (3), R2]). > [R1] arXiv:2212.04089 > [R2] arXiv:2305.12827 > [R3] https://openreview.net/forum?id=vRvVVb0NAz **Q3: Further explanation on why IRM works for LLM unlearning, a solution in a relatively smooth loss landscape, or a solution in a disentangled parameter space?** **A3:** Thank you for this insightful question. We conducted experiments to visualize the forget loss landscape of IRM using the visualization method in https://arxiv.org/pdf/1712.09913, with results presented in **[Figure R6](https://ibb.co/qLXDgVfJ)**. As shown, applying IRM does not smooth the model's forget loss landscape. This is expected, as the invariance regularization (i.e., the gradient norm term in Eq. (4)) is defined over the fine-tuning datasets rather than the forget data. Its purpose is not to smooth the unlearning optimization (i.e., the upper-level problem in Eq. (2)), but to ensure that the unlearned model $\boldsymbol \phi$ satisfies the optimality conditions for fine-tuning, i.e., the lower-level constraints in Eq. (2) (Lines 166-169). **Q4: Why can using a single fine-tuning dataset, and even the original unlearning dataset, still be useful?** **A4:** Thanks for your question. First, the better performance with a single fine-tuning dataset can be attributed to the IRM-type optimization difficulty in Eq. (2).
As shown, using more fine-tuning datasets ($\mathcal D_i$) increases the challenge of achieving a lower-level solution by creating a more restricted constraint space (to satisfy multiple fine-tuning task optimalities simultaneously). This complicates the unlearning optimization in Eq. (4), making it harder to achieve stationarity across all fine-tuning tasks. Second, using the original unlearning dataset (Lines 220–228) remains valuable, especially under the relearning attack [R4], where forget data samples are used for fine-tuning. Still, this may not sufficiently improve robustness for other fine-tuning tasks (Lines 261–270). > [R4] arXiv:2406.13356 **Q5: Additional datasets and baselines, and other suggestions.** **A5:** First, we added results on the MUSE dataset and compared ILU with TAR; see Q3 and Q2 from [Reviewer b9VT](https://openreview.net/forum?id=x2lm33kdrZ&noteId=aitszercwQ). Second, Fig. 6 shows our main hyperparameter sensitivity study. We plan to extend this to the optimizer in the revision and include statistical reporting. Notably, Fig. 5 demonstrates that our robustness performance consistently surpasses the baselines. --- Rebuttal Comment 1.1: Comment: For Q3, it remains unclear why Eq. 2 can lead to both improved unlearning and retention. Also, to my knowledge, IRM works well when the training and test data have at least some aspects in common. The reason why IRM can maintain performance on a completely orthogonal task is hard for me to understand. For Q4, the authors only mentioned the reason for their design choice yet did not discuss why it can lead to better results. Also, from the theoretical analysis of IRM, it tries to find invariant features across datasets. If only one retain dataset is adopted, the model may wrongly select variant features and thereby lead to overfitting. Also, the orthogonality assumption somewhat violates the basic assumption of IRM.
So, I think some new theoretical analysis is required, considering the completely different goals of the original IRM work and what you do. --- Reply to Comment 1.1.1: Comment: **Q1: Additional questions on Q3.** **Response:** Thank you for raising these valuable questions. Below is further clarification: **(R1-1)** Both unlearning (upper-level, i.e., forget loss minimization) and retention (lower-level, i.e., fine-tuning loss minimization) are captured by the coupled bi-level optimization problem in Eq. (2). The lower-level solution, $\mathbf{w}^*(\boldsymbol{\theta})$, guides the unlearning conducted at the upper level while maintaining optimality for the fine-tuning task. Empirically, we find that a single, unrelated fine-tuning task generalizes well for unlearning invariance across tasks, as shown in the task vector geometry analysis in **[Table R1](https://ibb.co/m5fz1V2w)** (extended from Fig. 4). This is likely because other fine-tuning tasks are similarly unrelated to the unlearning task, making their influence comparable. Thus, the invariance captured via one such task is sufficient to guide the bi-level optimization effectively. **(R1-2)** At first glance, training and testing data in IRM need to share common elements, such as digit objects in Colored MNIST (Table 2, https://arxiv.org/pdf/2303.02343). However, conventional IRM mainly addresses supervised classification, where training/testing environments may contain spurious correlations between non-core features (e.g., digit color like Red or Blue) and classification labels (e.g., group IDs). These spurious correlations can even be **opposite** across environments. For instance, one environment may associate Red-colored digits with group ID 1, while another flips this, associating group ID 1 with Blue-colored digits. The differing spurious correlations between training and testing environments in conventional IRM make finding invariance more challenging, even when environments share common digits.
The reviewer seems to infer that the unlearning dataset (WMDP) and fine-tuning dataset (GSM8K) are "orthogonal," making IRM a challenging task. We make the following points: a. Based on our earlier discussion of flipped spurious correlations between data features and labels, we believe negative correlations between environments make IRM harder to find invariance, compared to orthogonality, as they introduce conflicts even if the environments share some common aspects. b. As explained in (R1-1), invariance optimization in the unlearned model is governed by the coupled bi-level optimization foundation. While the globally optimal solution may not be achievable with orthogonal unlearning and fine-tuning tasks, a good (local) solution is still possible, as shown by our WMDP/MUSE experiments and task vector geometry validation in Fig. 4. While IRM works in its original setting, it can also be extended to unlearning and fine-tuning by following its bi-level optimization foundation. **Q2: Additional questions for Q4.** **Response:** Apologies for the confusion; it seems our key point wasn’t well delivered in the original response. **(R2-1)** Using multiple fine-tuning sets to promote invariance in Eq. (2) **may over-constrain** the lower-level solution space, making it harder to find a good solution and increasing the complexity of the bi-level optimization. To validate this, [Figure R7](https://ibb.co/0p8wdPng) shows that using multiple fine-tuning sets for stationarity regularization (gradient norm regularization in Eq. (4)) makes optimization more difficult, as indicated by the higher loss compared to using a single fine-tuning set over optimization steps. **(R2-2)** While our motivation stems from IRM for invariance across "multiple environments" (R1-2), the more general rationale lies in its bi-level optimization foundation (R1-1) to align unlearning (upper-level optimization) with fine-tuning optimality (lower-level optimization), reducing conflicts. 
Even using a single, unrelated retain dataset (e.g., GSM8K), the key is leveraging fine-tuning signals to guide the unlearning-finetuning relationship, minimizing fine-tuning’s negative impact. Contrary to the reviewer's overfitting hypothesis, we show that such a dataset effectively regularizes the unlearning process, as also discussed in R1-1. **Q3: The orthogonality assumption violates IRM’s basic assumption.** **Response:** As clarified in R1 and R2-2, our proposal extends the original IRM setting while following its bi-level optimization foundation, ensuring unlearning remains robust to fine-tuning’s influence on the unlearned model. The orthogonality assumption applies between the optimized unlearning and fine-tuning tasks (see Line 218, Sec. 5, and Fig. 4), not between training and test-time fine-tuning environments. We feel this orthogonality does not violate IRM's basic assumption, as explained in R1-2. As shown in **[Table R1](https://ibb.co/m5fz1V2w)**, the ILU-derived unlearning task vector indeed remains nearly orthogonal to different test-time fine-tuning task vectors, which differ from the GSM8K task used during training.
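The near-orthogonality check behind Fig. 4 and Table R1 amounts to measuring the angle between two flattened task vectors, each taken relative to the pre-trained weights. The vectors below are toy stand-ins for the real model deltas, purely for illustration:

```python
import numpy as np

def task_vector_angle(theta_pre, theta_a, theta_b):
    """Angle (degrees) between two task vectors tau = theta_task - theta_pre."""
    tau_a = theta_a - theta_pre
    tau_b = theta_b - theta_pre
    cos = tau_a @ tau_b / (np.linalg.norm(tau_a) * np.linalg.norm(tau_b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

theta_pre     = np.zeros(3)
theta_unlearn = np.array([1.0, 0.0, 0.0])  # toy unlearning direction
theta_ft      = np.array([0.0, 1.0, 0.0])  # toy fine-tuning direction
print(task_vector_angle(theta_pre, theta_unlearn, theta_ft))  # ~90 degrees: the "⊥" case
```

An angle near 90 degrees corresponds to the ⊥ relation (cosine 0) defined in Sec. 5, i.e., minimal conflict between the unlearning and fine-tuning directions.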
Summary: The authors propose a novel method to enhance the robustness of language model unlearning against fine-tuning. The core contribution is the introduction of invariance regularization, inspired by Invariant Risk Minimization, which aims to make unlearning effects resilient to subsequent fine-tuning. The paper demonstrates that existing unlearning methods often lose their efficacy after even minimal fine-tuning, and provides both theoretical analysis and empirical evidence that the proposed regularization significantly improves robustness. Claims And Evidence: Yes Methods And Evaluation Criteria: The evaluation focuses exclusively on benign fine-tuning with clean data. However, in adversarial settings, more aggressive approaches (like targeted relearning attacks) might be employed to reverse unlearning effects. The paper would be stronger if it evaluated against such stronger attack scenarios to demonstrate the limits and capabilities of the proposed method. Theoretical Claims: Yes, I checked. There are no issues. Experimental Designs Or Analyses: Yes, I checked. There are no issues. Supplementary Material: Yes, I reviewed two parts of the supplement. Relation To Broader Scientific Literature: The paper addresses an important but under-explored aspect of machine unlearning - maintaining unlearning effects after model fine-tuning. Essential References Not Discussed: To the best of my knowledge, there are no essential related works missed. Other Strengths And Weaknesses: Strengths: (1) The paper addresses an important but under-explored aspect of machine unlearning - maintaining unlearning effects after model fine-tuning. As language models continue to be deployed, updated, and fine-tuned in practice, ensuring the persistence of unlearning is a critical safety concern. The framing of this problem is novel and timely. (2) The task vector analysis provided to explain why the method works is effective and insightful.
By connecting the geometry of task vectors between unlearning and fine-tuning, the authors provide substantive evidence for why invariance regularization prevents the "forgetting of forgetting" phenomenon. This analysis helps clarify the underlying mechanism of the proposed solution. (3) The paper is well-written and logically structured. The authors clearly articulate the problem, motivate their approach, and present results in an accessible manner. The figures effectively illustrate the key concepts and findings, making the technical content more approachable. Weaknesses: (1) My primary concern is the absence of comparisons with existing methods designed to enhance unlearning robustness. Particularly, works like "Tamper-resistant safeguards" have been proposed specifically to make unlearning methods more robust. Without direct comparisons to such baselines, it is difficult to assess the relative advantages of the proposed approach. This omission significantly limits the strength of the empirical claims. (2) The evaluation focuses exclusively on benign fine-tuning with clean data. However, in adversarial settings, more aggressive approaches (like targeted relearning attacks) might be employed to reverse unlearning effects. The paper would be stronger if it evaluated against such stronger attack scenarios to demonstrate the limits and capabilities of the proposed method. (3) The empirical evaluation is conducted on a single language model architecture. Given the significant architectural differences among modern LLMs (Llama, Qwen, etc.), it would strengthen the paper to demonstrate that the benefits of invariance regularization generalize across different model families and sizes. Other Comments Or Suggestions: No. Questions For Authors: I will change my evaluation if the following questions could be solved: (1) Could you provide the comparison and analysis of other baselines like "Tamper-Resistant Safeguards for Open-Weight LLMs". 
(2) The paper would be stronger if it evaluated against stronger attack scenarios to demonstrate the limits and capabilities of the proposed method. (3) If experiments could be conducted on more LLMs, it would help demonstrate the scalability of the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 3
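For readers unfamiliar with the regularizer family the review refers to, here is a minimal sketch of an IRMv1-style invariance penalty. This is not the paper's actual ILU objective; it is the classic Invariant Risk Minimization penalty (squared gradient of each environment's risk with respect to a scalar classifier scale at w = 1), shown with a toy 1-D featurizer and squared loss. All names are hypothetical.

```python
import numpy as np

def irmv1_penalty(feats, targets, envs):
    """IRMv1-style invariance penalty for a fixed 1-D featurizer output
    `feats`, squared loss, and a dummy scalar classifier w = 1.

    Per environment e: R_e(w) = mean_i (w * f_i - y_i)^2,
    so dR_e/dw at w = 1 is mean_i 2 * f_i * (f_i - y_i).
    The penalty sums the squared per-environment gradients.
    """
    penalty = 0.0
    for e in np.unique(envs):
        f = feats[envs == e]
        y = targets[envs == e]
        grad = np.mean(2.0 * f * (f - y))  # dR_e/dw evaluated at w = 1
        penalty += grad ** 2
    return penalty

feats = np.array([1.0, 2.0, 1.0, 2.0])
targets = np.array([1.0, 2.0, 0.0, 1.0])
envs = np.array([0, 0, 1, 1])
# env 0 is already invariant (f == y), env 1 is not
print(irmv1_penalty(feats, targets, envs))  # -> 9.0 (only env 1 contributes)
```

In IRM-style training, this penalty is added to the pooled risk with a weight λ, pushing the model toward features whose optimal classifier is the same across environments.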
Rebuttal 1: Rebuttal: **Q1: Robust unlearning evaluation on aggressive approaches (like targeted relearning attacks).** **A1:** We appreciate the reviewer’s suggestion and have incorporated additional experiments to evaluate our method under the **relearning attack** setting [R1]. Specifically, the attack involves fine-tuning the unlearned model for one epoch using a small subset of **forget samples** (e.g., 60 samples in our experiments). **[Table R2](https://ibb.co/DHd5Zw2c)** shows the forget quality (FQ) metric before and after the relearning attack for **ILU** methods, compared to **NPO**, **RMU**, and **TAR** (which uses meta-learning for robustness against harmful fine-tuning) [R2]. Note that TAR was implemented on the LLaMA3-8B model, and we maintained a consistent experimental setup. As shown in **[Table R2](https://ibb.co/DHd5Zw2c)**, ILU-based methods demonstrate significantly stronger robustness against the relearning attack compared to the baseline NPO and RMU, as evidenced by the much smaller FQ drop after the attack. The performance of the new baseline TAR is also included. As seen, ILU and TAR exhibit nearly identical robustness, with only a small FQ drop of 0.04–0.05. However, as discussed in Lines 55–61, we did not use TAR due to its significantly high computational cost. We defer the discussion of TAR's computational limitations to the next question. > [R1] Hu, Shengyuan, et al. "Unlearning or Obfuscating? Jogging the Memory of Unlearned LLMs via Benign Relearning." to appear in ICLR’25. > [R2] Tamirisa, Rishub, et al. "Tamper-resistant safeguards for open-weight llms." arXiv preprint arXiv:2408.00761 (2024). **Q2: Comparison with "Tamper-Resistant Safeguards for Open-Weight LLMs".** **A2:** Thank you for the suggestion. In the rebuttal, we included **TAR** as an additional baseline, which is based on the NPO loss, uses **LLaMA3-8B-Instruct** as the base model, and achieves model robustness through a **meta-learning** approach. 
To ensure a fair comparison, we also developed NPO+ILU(GSM8K) on the same LLaMA3-8B-Instruct model and compared their performances. As shown in **[Table R3](https://ibb.co/4ZSkRQGh)**, both TAR and our proposed ILU achieve similar unlearning performance before and after downstream fine-tuning. For the average robust accuracy (RA), ILU decreases RA by 0.03 post fine-tuning, slightly more than TAR’s decrease of 0.02. Consistent robustness performance was also observed in **[Table R2](https://ibb.co/DHd5Zw2c)** against the relearning attack. However, TAR introduces significantly higher computational overhead compared to ILU. **[Table R3](https://ibb.co/4ZSkRQGh)** also measures the total training time for both methods. We observe that TAR, being based on meta-learning, incurs much higher time consumption due to the multi-step gradient unrolling required for robustness against fine-tuning. Specifically, TAR takes 7,441.9 minutes, while ILU only requires 118.2 minutes, achieving the same performance and being **63 times faster than TAR**. The above results highlight that ILU not only achieves highly competitive unlearning robustness but also significantly improves efficiency, making it a more practical choice for LLM unlearning. **Q3: Additional experiments on different LLMs.** **A3:** We understand the reviewer’s concern regarding the **model choice** and the **consistent effectiveness** of our proposal. To address this, we demonstrate the performance of **ILU** on the larger **LLaMA3-8B-Instruct model**, comparing it with **TAR** (**[Table R3](https://ibb.co/4ZSkRQGh)** and **[Table R2](https://ibb.co/DHd5Zw2c)**). Additionally, we extended our experiments to the **MUSE LLM unlearning benchmark** (https://arxiv.org/abs/2407.06460), which includes: (a) **MUSE-News**, based on BBC News, using the **LLaMA-2 7B** model as the target, and (b) **MUSE-Books**, based on Harry Potter books, using the **ICLM-7B** model as the target. 
**[Table R4](https://ibb.co/m5Rtqc3t)** compares the performance of ILU with NPO, where forget quality (FQ) is measured by KnowMem and VerbMem on $D_f$, along with fine-tune accuracy (FA) after fine-tuning. As seen, before fine-tuning, both NPO and NPO+ILU (GSM8K) demonstrate strong unlearning effectiveness across MUSE-News and MUSE-Books. After fine-tuning, the unlearning performance (KnowMem and VerbMem on $D_f$) is largely preserved for ILU (staying at low values), while NPO shows a clear increase in these metrics post-fine-tuning.
Summary: This paper addresses the challenge of machine unlearning in large language models (LLMs) by improving the robustness of removing targeted knowledge while preserving model utility. Existing unlearning methods are highly sensitive to downstream fine-tuning, often leading to the unintended recovery of unlearned information, even when the fine-tuning task is unrelated. To address this, the authors introduce invariance regularization inspired by Invariant Risk Minimization (IRM) and propose a new framework called Invariant LLM Unlearning (ILU), which enhances resistance to fine-tuning-induced recovery. Extensive experiments on the WMDP benchmark show that ILU significantly outperforms state-of-the-art unlearning methods, such as negative preference optimization (NPO) and representation misdirection for unlearning (RMU). Claims And Evidence: The paper claims that unlearned concepts can be recovered by finetuning on an irrelevant dataset. The claim is supported by the prior work as well as experiments shown in this work. The paper proposes an invariant unlearning approach built on invariant risk minimization. The performance of the proposed method is supported with experiments in the paper. Overall, I think the claims in this work are well-supported by experiments and prior research. Methods And Evaluation Criteria: The proposed unlearning method based on IRM is sound. Evaluation criteria on unlearning accuracy and finetuning accuracy make sense. Theoretical Claims: This work does not have any theoretical claim. Experimental Designs Or Analyses: Experiment on six finetuning datasets and WMDP is appropriate. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: This work leverages the finding in prior literature that unlearning is sensitive to finetuning and adversarial recovery. The proposed method is based on IRM. Essential References Not Discussed: The paper has properly cited the related works. 
Other Strengths And Weaknesses: As mentioned above. Other Comments Or Suggestions: The task vector results are somewhat confusing. The difference does not seem significant (0.16 vs 0.09). I am not sure if the task vector analysis is averaged or just for a single sample. The authors should explain the significance of the improved cosine metric as well as incorporate more data samples and tasks to ensure the generality of this study. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed summary of our work and contributions. We also appreciate the insightful comments and provide our detailed responses below. **Q1: Confusion in Fig. 4 task vector.** **A1:** We apologize for any confusion caused in Fig. 4 and our presentation. We will improve them in the revision. The difference between the two angles (or cosine values) shown in Fig. 4 is not intended to explain the robustness gain of ILU over NPO. Instead, these angles serve to validate our geometry plots and are later used in our analysis (between Lines 318-Left and Line 285-Right in our submission). In summary, the sign flip (Line 281-Right) when comparing $ \cos(\angle (\tau_{\text{NPO} \to \text{ft}}, \tau_{\text{NPO}})) = -0.41 < 0$ and $ \cos(\angle (\tau_{\text{ILU} \to \text{ft}}, \tau_{\text{ILU}})) = 0.09 > 0$ demonstrates that NPO post-fine-tuning results in a much larger deviation from its original unlearning direction (an obtuse angle), whereas ILU remains near orthogonal, better disentangling the fine-tuning effect from the original unlearning and preserving the unlearning direction within the unlearning space after fine-tuning, as shown by our geometric validation in Fig. 4. **Q2: On the significance of task vector analysis for more samples/tasks.** **A2:** The task vector is defined as the difference between the fine-tuned/unlearned model and its base model over the fine-tuning dataset, as illustrated in Lines 278-287. Therefore, it depends on the entire fine-tuning task rather than a single sample. To better highlight the significance of our analysis, we follow the reviewer’s suggestion to conduct a task vector analysis across additional fine-tuning tasks within the NPO-based unlearning context. 
See results in **[Table R1](https://ibb.co/m5fz1V2w)**, which reported two metrics: $ \cos(\angle (\tau_{\text{NPO} \to \text{ft}}, \tau_{\text{NPO}}))$ and $ \cos(\angle (\tau_{\text{ILU} \to \text{ft}}, \tau_{\text{ILU}}))$, as clarified earlier in Fig. 4. A smaller negative value for the former $ \cos(\angle (\tau_{\text{NPO} \to \text{ft}}, \tau_{\text{NPO}}))$ indicates that the fine-tuning task vector forms a larger obtuse angle with the unlearning task vector, implying greater conflict between the two. In contrast, the cosine value closer to 0 for the latter $ \cos(\angle (\tau_{\text{ILU} \to \text{ft}}, \tau_{\text{ILU}}))$ demonstrates the effectiveness of our method, as it has less conflict between the fine-tuning and unlearning directions.
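The cosine-of-task-vectors metric discussed in this exchange can be made concrete with a small sketch. A task vector is the parameter-wise difference between two checkpoints, flattened into a single vector; the cosine of the angle between the fine-tuning and unlearning task vectors is then an ordinary cosine similarity. The toy checkpoints below are hypothetical; real task vectors are computed from full model state dicts.

```python
import numpy as np

def task_vector(model_params, base_params):
    """Task vector = parameter-wise difference, flattened into one vector."""
    return np.concatenate([
        (model_params[k] - base_params[k]).ravel() for k in sorted(base_params)
    ])

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy "checkpoints": base model, unlearned model, fine-tuned unlearned model.
base = {"w": np.array([0.0, 0.0]), "b": np.array([0.0])}
unlearned = {"w": np.array([1.0, 0.0]), "b": np.array([0.0])}
finetuned = {"w": np.array([1.0, 1.0]), "b": np.array([0.0])}

tau_unlearn = task_vector(unlearned, base)   # unlearning direction
tau_ft = task_vector(finetuned, unlearned)   # subsequent fine-tuning direction
print(cosine(tau_ft, tau_unlearn))  # -> 0.0 (orthogonal directions)
```

Under this metric, a negative cosine (obtuse angle) means the fine-tuning update partially reverses the unlearning update, while a value near zero means the two directions are disentangled, mirroring the NPO vs. ILU comparison in the rebuttal.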
Heads up! Large Language Models Can Perform Tasks Without Your Instruction via Selective Attention Head Masking
Accept (poster)
Summary: This article explores the ability of large language models (LLMs) to perform tasks without relying on explicit instructions through selective attention head masking. It finds that there exist "functional paths" composed of attention head combinations within the model, which are crucial for task execution. Experiments show that masking strategies can effectively improve task performance, and that pre-trained models already possess task-related knowledge that can be triggered by masking. The study proposes a method for detecting functional pathways, revealing the key role of attention head combinations and providing a new perspective for understanding the internal mechanisms of LLMs. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. There is no theoretical claim. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. The code is in an anonymous GitHub repository. Relation To Broader Scientific Literature: This article establishes a connection between instruction tuning and pruning of attention heads, which is very interesting. This provides a new perspective for better understanding the essence of instruction tuning and the internal mechanisms of LLMs. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** 1. Novel Insight into Model Functionality. The paper reveals how specific combinations of attention heads enable task execution without explicit instructions. 2. Practical Methodology. The proposed attention head masking technique is simple yet effective, requiring no modification to model parameters. This approach can be easily applied across various models and tasks. **Weaknesses** 1. Lack of rigorous theoretical analysis. 2. There are no larger-scale experiments. This article states that "the Large model shows better head functionality", but the largest model tested is only 14B. Other Comments Or Suggestions: No. Questions For Authors: 1.
Can you conduct corresponding experiments at a larger scale, such as 72B, and on MoE models? I am not sure whether the MoE model shows similar improvements to the dense model, because, generally speaking, MoE models may be more inclined towards memorization, and the proportion of attention parameters in MoE models is relatively small. 2. Another question I am curious about is whether this method can directly stimulate the reasoning ability of the base model. For example, for DeepSeek-V3, masking some heads might significantly improve its performance on benchmarks like AIM. If masking attention can stimulate better reasoning ability, I believe this would be a significant contribution. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Many thanks for your appreciation and valuable remarks on our work. We address and clarify your concerns below.

> W1. Theoretical analysis

Since our conclusions are primarily derived from experiments and observations rather than strict theoretical analysis, we sincerely apologize for the shortcomings in this regard. Nevertheless, some studies [1][2] have provided insights that support the validity of our approach, in which we regard attention heads as computational nodes in the information flow and analyze their functionality.

[1] Yao, Y., Zhang, N., Xi, Z., Wang, M., Xu, Z., Deng, S., & Chen, H. (2024). Knowledge circuits in pretrained transformers. arXiv preprint arXiv:2405.17969.

[2] Hao, Y., Dong, L., Wei, F., & Xu, K. (2021, May). Self-attention attribution: Interpreting information interactions inside transformer. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 14, pp. 12963-12971).

> W2. Larger-scale experiments

In Sec. 3.4, we stated that "Larger model shows better head functionality," primarily based on the criterion of whether *behavior switching* can be achieved. This is because, in models smaller than 3B, masking out certain attention heads does not reliably trigger behavior switching. To further support this claim, we conducted additional experiments on Qwen2.5-32B-Instruct and found that it also exhibits behavior switching in translation tasks (see Q1).

> Q1. Large model & MoE model

We are very sorry, but we currently do not have the resources to conduct experiments on a model as large as 72B. However, we have performed training on a 32B model (Qwen2.5-32B-Instruct). Due to time constraints, we only trained head masks for translation from English into six other languages. Generally, we found that applying masks can still improve model performance on the 32B model.
- PPL

||en-zh|en-fr|en-es|en-de|en-ru|en-ar|
|-|-|-|-|-|-|-|
|with instruction|2.739|1.325|1.385|1.448|1.587|1.960|
|with mask|**0.943**|**0.553**|**0.448**|**0.554**|**0.832**|**1.326**|

- ROUGE-L

||en-zh|en-fr|en-es|en-de|en-ru|en-ar|
|-|-|-|-|-|-|-|
|with instruction|0.470|0.428|0.415|0.445|0.388|**0.293**|
|with mask|**0.684**|**0.703**|**0.752**|**0.708**|**0.449**|0.290|

For MoE models, we selected Phi-3.5-MoE-instruct (16×3.8B, activating 6.6B) for head mask training on the en-zh translation task. We observed that while head masking in the MoE model could trigger behavior switch, it did not lead to performance improvements. This might be because MoE model parameters are more inclined toward memory. We propose another two possible explanations: **(1)** The primary functionality of MoE might reside in the FFN, with the router playing a role similar to attention head masking for function selection. **(2)** The task-level mask and the token-level router may not work well together. Improving the effectiveness of head masks in MoE models is an important direction for our future research.

- PPL

||en-zh|en-fr|en-es|en-de|en-ru|en-ar|zh-en|zh-fr|zh-es|zh-de|zh-ru|zh-ar|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|with instruction|1.088|1.046|0.857|1.008|1.286|0.851|2.747|2.160|2.371|2.317|1.825|1.237|
|with mask|**0.786**|**0.684**|**0.558**|**0.680**|**0.877**|**0.558**|**1.098**|**1.048**|**1.141**|**1.202**|**1.148**|**0.746**|

- ROUGE-L

||en-zh|en-fr|en-es|en-de|en-ru|en-ar|zh-en|zh-fr|zh-es|zh-de|zh-ru|zh-ar|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|with instruction|**0.448**|**0.650**|**0.674**|**0.628**|**0.429**|**0.425**|0.519|**0.465**|**0.484**|**0.449**|**0.378**|**0.344**|
|with mask|0.400|0.361|0.568|0.481|0.282|0.278|**0.558**|0.314|0.327|0.350|0.231|0.237|

> Q2.
Stimulate reasoning ability After the release of DeepSeek-R1, we attempted to use R1 to generate high-quality reasoning chains for simple tasks and then trained head masks on these chains in an effort to activate reasoning capabilities in other instruct models. Unfortunately, our results were not as promising as we had hoped. Whether we applied the mask directly or after explicitly prompting the model to "think step by step," the model was still unable to achieve high-quality reasoning solely by selecting specific attention heads. (It may still require parameter modification to stimulate its potential reasoning capabilities, such as LIMO[3]) For the masks we obtained, they tended to discard only a small number of attention heads (less than 10% in a 7B model), and for complex tasks like mathematics, almost all attention heads were retained. This leads us to believe that reasoning is a complex process involving the coordination of multiple functions at different stages of generation. As a result, it may not be feasible to activate reasoning abilities solely through head masking at this level of "task granularity". [3] Ye, Y., Huang, Z., Xiao, Y., Chern, E., Xia, S., & Liu, P. (2025). LIMO: Less is More for Reasoning. arXiv preprint arXiv:2502.03387. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. I understand that some experiments may be difficult to implement due to lack of resources. I maintain my positive score, and recommend the author to include the results of MOE model in the manuscript. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 7BU1, Thank you for your understanding regarding our limited resources. And we will include a discussion of the MoE model results in the next version. In addition, although masking attention heads does not seem to stimulate the model’s reasoning abilities, this phenomenon has still shown effectiveness in other scenarios, such as in multimodal models (e.g. 
speech model SALMONN, and corresponding speech-related tasks) [[Results🔗]](https://i.imgur.com/rWAUytC.png) and in few-shot ICL scenarios [[Results🔗]](https://i.imgur.com/gvKuCU4.png). You may refer to these links for more details if you are interested. **[Update]** In our recent study, we found that applying a similar 0-1 mask to the experts in the MoE model (Deepseek-MoE-16B) can also enable the model to directly perform tasks without explicit instructions. As our approach involves modifying the expert router, this can be viewed as a form of selective masking at the FFN level and at the token granularity. While further investigation is left for future work, this suggests that the masking idea proposed in our paper can also be extendable to expert FFNs to probe their functionalities.
Summary: This paper proposes a simple yet effective attention head masking method for large language models. Specifically, it trains per-head weights that indicate the importance of each head to the task. After training, the trained head weights can be mapped to a head mask and used as the final mask for inference. Moreover, this paper finds that there are structured groups of interdependent attention heads for different tasks in LLMs. Claims And Evidence: Yes, the claims made in the submission are supported by clear evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Yes, the main experimental and ablation study designs and analysis are valid. Supplementary Material: Yes, I have reviewed the appendix. Relation To Broader Scientific Literature: This work provides a deep and comprehensive analysis for uncovering the functional behaviors of attention heads in LLMs. Unlike prior efforts that relied on pruning or external prompts, this study shows that selectively activating relevant heads can improve performance, offering insights into efficiency and controllability. Essential References Not Discussed: The related works mentioned in this paper are sufficient for understanding its key contributions. Other Strengths And Weaknesses: **Strengths**: 1. The paper is easy to follow and understand, and the structure is good. 2. The attention head masking strategy proposed in this paper performs well on various tasks. 3. This work proves that LLMs inherently encode structured functional pathways, where specific attention heads are crucial for executing distinct tasks. **Weaknesses**: 1. **Training Overhead**: For different model series and different tasks, we need to train different attention head masks, which introduces non-negligible computational overhead.
Moreover, the paper should clarify the training overhead for the head weights. 2. **Lack of Experiments**: For the translation tasks, 100 samples are selected as the evaluation set without any motivation or explanation. The paper should clarify the reason why 100 samples were chosen. Moreover, more evaluations on common downstream tasks like HellaSwag or ARC-C are suggested to be included. 3. **Efficiency Analysis**: An efficiency analysis for the inference stage is recommended to be included in the paper, such as end-to-end latency or a GPU memory comparison; otherwise, there remain concerns about its actual efficiency in real-world usage. Other Comments Or Suggestions: Please see the above sections. Questions For Authors: 1. Have you checked the patterns of attention heads selected for different tasks? Can we find some common patterns from those heads? 2. What will the performance of the proposed attention head masking strategy be on few-shot learning? 3. Could you please discuss and compare the findings in your work with the findings in DuoAttention [1]? 4. Following Q3, will your strategy still be effective for long-context scenarios? 5. How can the masking ratio among all attention heads be controlled? Could you please provide some insights or solutions? [1] Xiao, G., Tang, J., Zuo, J., Guo, J., Yang, S., Tang, H., Fu, Y., & Han, S. (2024). DuoAttention: Efficient long-context LLM inference with retrieval and streaming heads. arXiv. https://arxiv.org/abs/2410.10819 Code Of Conduct: Affirmed. Overall Recommendation: 3
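The masking scheme the review summarizes — learn a continuous importance weight per attention head, then binarize it into a 0-1 mask applied to head outputs at inference — can be sketched as follows. This is a toy numpy illustration, not the paper's implementation: the per-head logits would be learned by gradient descent on task data, and all names and shapes here are illustrative.

```python
import numpy as np

def apply_head_mask(head_outputs, head_logits, threshold=0.5):
    """Scale each attention head's output by a 0/1 mask derived from
    learned per-head logits (sigmoid, then hard threshold).

    head_outputs: (num_heads, seq_len, head_dim)
    head_logits:  (num_heads,) learned importance scores
    """
    weights = 1.0 / (1.0 + np.exp(-head_logits))          # sigmoid -> soft weights
    mask = (weights > threshold).astype(head_outputs.dtype)  # binarize to 0/1
    return head_outputs * mask[:, None, None], mask

rng = np.random.default_rng(0)
outs = rng.standard_normal((4, 3, 2))        # 4 heads, toy shapes
logits = np.array([2.0, -3.0, 0.5, -1.0])    # e.g. learned during mask training
masked, mask = apply_head_mask(outs, logits)
print(mask)  # heads 0 and 2 survive; heads 1 and 3 are zeroed out
```

Because the mask only scales head outputs and touches no parameters, removing it restores the original model exactly, which is the flexibility the rebuttals emphasize.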
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions on our work. We address and clarify your concerns below. As the text length is restricted, we provide the results in the anonymous GitHub (rebuttal.pdf).

First, we would like to clarify that the primary focus of this paper is on the interpretability and functionality of LLMs - specifically, the phenomenon that "tasks can be directly executed by selecting attention heads". We find this phenomenon very interesting, and most of the paper is devoted to validating its generality and attempting to explain its underlying mechanisms. (We have observed the same phenomenon even in multimodal speech models! See our response to b411.) Although using masks can indeed enhance model performance in some cases, we sincerely hope that you will pay more attention to our contribution regarding the interpretability of model functionality.

> W1. Training Overhead

It is indeed unavoidable to train a mask for each task on every model to identify its functional heads. However, because our training only involves trainable parameters equal in number to the attention heads, the process is both fast and requires little memory (only 44 minutes and 16.5GB of VRAM on llama3.1-8b). We have also compared this approach with full fine-tuning and LoRA fine-tuning in terms of resource consumption. (Note that since fine-tuning modifies the model parameters, these two methods would undoubtedly achieve better performance. Nevertheless, we are not proposing a PEFT method, and we have neither modified nor added to the model’s original parameters.)

> W2. Lack of Experiments

Initially, we chose to test on 100 samples because we wanted to use more training samples to locate the attention heads required for the translation task more accurately, and the evaluation set was just used to verify whether the model can perform translation.
But, indeed, it was an oversight on our part to use such a small number of samples for the quantitative evaluation. As with the XNLI dataset, we nonetheless evaluated on the full IWSLT2017 evaluation set in the paper (see Table 7 and App. A) without incorporating any additional training data. We would also like to argue that with a smaller train-test ratio, we can still achieve results similar to those reported in the paper. Please refer to the experimental results obtained after re-dividing the dataset with an 8:2 ratio. For other benchmarks: although most benchmark tasks include explicit "instructions" such as options, we have still managed to conduct evaluations on some datasets. We achieved better performance on winogrande and hellaswag.

> W3. Efficiency Analysis

Taking the llama3.1-8b model and the en-zh translation task as an example, we masked out 192 (18.75%) attention heads, and pruned the corresponding Q-head and $W_O$ matrices. Since the attention heads account for only 16.7% of the model’s total parameters, and the pruning only affects Q and O, we actually only saved 2.5% of the parameters. As some studies focusing on pruning can remove 20-30% of the parameters with only minimal performance loss, we would like to clarify again that our work is not aimed at achieving inference efficiency, but rather at locating and interpreting the model’s modularity.

> Q1. Head Pattern

In App. C.2 and Figure 7 we have some discussions on the head patterns, and you may check them out. We also have some other findings: **1.** Similar tasks tend to share more attention heads; for example, 42 translation tasks share 177 common attention heads, but 35 simple tasks share only 5 heads. **2.** Due to sparsity, as the model size increases (from Qwen2.5-1.5B to 14B), each attention head tends to be responsible for fewer tasks on average.

> Q2. Performance on few-shot learning

We conducted some supplementary experiments on simple tasks; please take a look at the results. Summary:

1. When using the mask obtained from 0-shot training for 5-shot inference, the performance ranking is: instructed 5-shot > mask 0-shot > mask 5-shot > 5-shot.
2. For the tasks in Table 2, using a mask trained with 5-shot and then performing mask 5-shot inference yields better performance than instructed 5-shot.
3. We combined multiple simple task datasets into a hybrid dataset, where each sample may come from different tasks but includes 5-shot examples from its corresponding task. After training the mask on this hybrid dataset, the model was able to enhance its 5-shot in-context learning performance (including tasks that were not seen during training).

For more of our responses, please refer to the anonymous GitHub. Sorry for the inconvenience!
Summary: The authors study the finding that there are several attention heads which, if switched off, steer the model to perform a specific task without fine-tuning. Claims And Evidence: - Switching off attention heads leads to similar results as prompting (experiments for language translation) - Masks can be learned, similar to fine-tuning Methods And Evaluation Criteria: - Used language translation datasets Theoretical Claims: No theoretical claims Experimental Designs Or Analyses: - Yes, checked the translation experiments and the comparison with random selection Supplementary Material: Yes. Additional language translation results. Relation To Broader Scientific Literature: There have been other works like https://arxiv.org/abs/2410.05603, which have studied how models superpose multiple tasks at the same time. Please comment on how your findings differ. Essential References Not Discussed: Check broader scientific literature Other Strengths And Weaknesses: - The method is quite simple and easy to implement - It is unclear if there are any practical benefits of this method. - It is unclear how the general performance of the model degrades. Other Comments Or Suggestions: See strengths and weaknesses section Questions For Authors: - Can the authors compare how the original performance degrades when removing attention heads? - Can the authors perform a comparison of activations with the prompting experiments? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review and comments. We provide our clarifications and responses below.

First and foremost, we would like to clarify certain statements in your Summary, Methods and Experimental Designs sections. Our experiments are not limited to language translation tasks; they also include 35 simple tasks (Lines 166-169, left column; Table 2, 4, 6, 9, etc.), which account for nearly half of all our experiments. However, this aspect was not mentioned in the review. Additionally, as a paper focused on the functionality and interpretability of LLMs, we have dedicated substantial portions of the text to exploring and analyzing the mechanism by which the model "switches behavior through selective attention heads". While it is true that using head masking can improve performance in certain scenarios, we would greatly appreciate it if you could focus more on the paper’s contributions to model functionality and interpretability. If there are any aspects of the paper that remain unclear, we are always here to provide further clarification and assistance.

> W1. Benefits

As stated above, our paper primarily contributes to the interpretability of models. We find the phenomenon of "enabling the model to perform tasks without instructions by selecting attention heads" to be very intriguing. We propose an efficient method, applicable to a wide range of tasks, for identifying which attention heads are crucial for the task. This phenomenon is not only observable across various architectures and sizes of LLMs (Section 3), but also applies to multimodal models, including those for speech (see below). These attention heads form interdependent "functional pathways" responsible for executing corresponding functions, thus helping us better understand how model modularity manifests in attention heads (Section 4). Additionally, by selecting these attention heads, the model often achieves a certain level of performance improvement (Section 5).
- Speech model: test on SALMONN

| Task | no instruction | with instruction | with mask |
| -- | -- | -- | -- |
| Gender recognition | 0.00 | 97.71 | **97.75** |
| Emotion recognition | 0.00 | 69.70 | **72.36** |
| ASV | 0.00 | **93.61** | 93.23 |
| Audio captioning | 14.90 | 20.89 | **24.33** |
| S2TT | 15.14 | **34.48** | 33.82 |

> W2/Q1. Degrades

After using the mask, the model undergoes a behavioral change and can no longer perform the original task as intended. As a result, the general performance will inevitably decrease. However, since the mask does not modify or add/remove any of the model’s parameters, we can simply stop using the mask to restore the model's original instruction capabilities. This process is straightforward and flexible. We also conducted some mask experiments on general benchmarks. Please see our response to uwMV.

> Q2. Activation experiments

In the paper, we conducted an experiment (Line 370, right column) to investigate the relationship between attention head activation (focus on special tokens) and the mask. Our findings suggest that the correlation between them is not strong. We discuss this phenomenon further in Appendix C.3.2, where we suggest that the "attentive function" of the attention heads does not play a major role in their selection.
Safety-Polarized and Prioritized Reinforcement Learning
Accept (poster)
Summary: Main findings: - The paper introduces MAXSAFE, a chance-constrained bi-level optimization framework for safe reinforcement learning to achieve hard-constraint satisfaction and near-zero costs in sparse cost settings. Particularly, MAXSAFE minimizes the unsafe probability and maximizes the return under safe policies. Main results: - This paper tests its algorithm on diverse autonomous driving and safe control tasks. MAXSAFE achieves a comparable reward-safety trade-off on both benchmarks. Algorithmic ideas: - The algorithm of this paper combines Q-learning + reachability estimation function (REF) + safety polarization + prioritized experience replay (PER). In particular, this paper extends REF to state-action REF (SA-REF) and constructs optimal action masks based on a polarization function to get rid of the gating operator. Besides, this paper utilizes the TD error of REF for PER to address the sparse cost setting. Claims And Evidence: This paper claims (1) highly consequential cost and (2) sparse cost settings as a challenge. I agree that claim (1) is important, which is equivalent to the generally studied hard-constraint problem. However, I am confused why sparse cost is a challenge. For the reward, we have sparse reward settings because we want to maximize the cumulative reward, and sparse reward settings indicate the environment cannot give valid reward signals. As for the cost, sparse cost settings indicate most of the cases are safe, which enlarges the feasible region of safety. This setting actually simplifies the problem in my view. - Full cost -> No feasible region; Dense cost -> small feasible region; Sparse cost -> Large feasible region; No cost -> Full feasible region (common RL). Methods And Evaluation Criteria: 1. Yes, the proposed methods and evaluation criteria make sense for the problem. However, the benchmark is a little old. For autonomous driving, here is a survey about open-source simulators. 
The highway-env is categorized into “driving policy simulator” in reference [1]. However, lots of newer simulators from 2020 to 2023 are also listed. 2. [1] Li, Y., Yuan, W., Zhang, S., Yan, W., Shen, Q., Wang, C., & Yang, M. (2024). Choose your simulator wisely: A review on open-source simulators for autonomous driving. IEEE Transactions on Intelligent Vehicles. https://ieeexplore.ieee.org/abstract/document/10461065. Theoretical Claims: In Appendix B.1., |max_a Q_1 - max_a Q_2| ≤ max_a |Q_1 - Q_2| is not obvious. I think it is better to give more details or hints. Experimental Designs Or Analyses: The baselines chosen for comparison in the experimental design are not fully sound/valid. This paper's claims concern sparse-cost and highly consequential (safety-critical) tasks. For these kinds of tasks, lots of hard-constraint methods have been proposed in safety-critical RL. However, most of the baselines (except for RESPPO in the Appendix) in this paper are soft-constraint RL methods, whose objective is to satisfy the constraint in expectation rather than as a hard constraint. I think this paper should add some hard-constraint RL methods (such as [1] and [2]) for comparison. --- [1] Yang, Y., Jiang, Y., Liu, Y., Chen, J., & Li, S. E. (2023). Model-free safe reinforcement learning through neural barrier certificate. IEEE Robotics and Automation Letters, 8(3), 1295-1302. Code link: https://github.com/jjyyxx/srlnbc [2] Zhao, W., He, T., & Liu, C. (2023, June). Probabilistic safeguard for reinforcement learning using safety index guided gaussian process models. In Learning for Dynamics and Control Conference (pp. 783-796). PMLR. Supplementary Material: Yes. Appendix A shows very well why a state-agnostic masking threshold is not enough. Appendix C explains all details of the experiments and implementations. Relation To Broader Scientific Literature: 1. This paper introduces CMDP and Action-correction-based Safe RL, which are two types of baselines in the experiments. 
Besides, Action Masking for Safe RL is also introduced as the foundation for their method. 2. Prioritized Experience Replay: This paper introduces PER, which is the foundation for their proposed Safety PER. Essential References Not Discussed: Yes, there are lots of related works that are essential for this paper but are not cited or discussed. 1. Hamilton-Jacobi reachability and reachability estimation function (REF) in Safe RL. As the most important contribution of this paper, these concepts are not introduced or discussed until they propose SA-REF. Relevant works include, but are not limited to, [1], [2], [3], [4], and [5]. It is better to discuss this topic in related work and then explain the concept in Preliminaries. --- [1] Ganai, M., Gao, S., & Herbert, S. (2024). Hamilton-jacobi reachability in reinforcement learning: A survey. IEEE Open Journal of Control Systems. [2] Wang, Y., & Zhu, H. (2024, July). Safe Exploration in Reinforcement Learning by Reachability Analysis over Learned Models. In International Conference on Computer Aided Verification (pp. 232-255). Cham: Springer Nature Switzerland. [3] Zhu, K., Lan, F., Zhao, W., & Zhang, T. (2024). Safe Multi-Agent Reinforcement Learning via Approximate Hamilton-Jacobi Reachability. Journal of Intelligent & Robotic Systems, 111(1), 7. [4] Dong, Y., Zhao, X., Wang, S., & Huang, X. (2024). Reachability Verification Based Reliability Assessment for Deep Reinforcement Learning Controlled Robotics and Autonomous Systems. IEEE Robotics and Automation Letters, 9(4), 3299-3306. [5] Zheng, Y., Li, J., Yu, D., Yang, Y., Li, S. E., Zhan, X., & Liu, J. (2024). Safe offline reinforcement learning with feasibility-guided diffusion model. arXiv preprint arXiv:2401.10700. 2. Hard-constraint RL: By now, much research has studied hard-constraint RL (safety-critical RL; near-zero constraint; persistently safe) --- [1] Yang, Y., Jiang, Y., Liu, Y., Chen, J., & Li, S. E. (2023). 
Model-free safe reinforcement learning through neural barrier certificate. IEEE Robotics and Automation Letters, 8(3), 1295-1302. [2] Zhao, W., He, T., Li, F., & Liu, C. (2024). Implicit Safe Set Algorithm for Provably Safe Reinforcement Learning. arXiv preprint arXiv:2405.02754. [3] Suttle, W., Sharma, V. K., Kosaraju, K. C., Seetharaman, S., Liu, J., Gupta, V., & Sadler, B. M. (2024, April). Sampling-based safe reinforcement learning for nonlinear dynamical systems. In International Conference on Artificial Intelligence and Statistics (pp. 4420-4428). PMLR. [4] Zhao, W., He, T., & Liu, C. (2023, June). Probabilistic safeguard for reinforcement learning using safety index guided gaussian process models. In Learning for Dynamics and Control Conference (pp. 783-796). PMLR. [5] Wei, H., Liu, X., & Ying, L. (2024, March). Safe reinforcement learning with instantaneous constraints: the role of aggressive exploration. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 19, pp. 21708-21716). [6] Gu, S., Sel, B., Ding, Y., Wang, L., Lin, Q., Knoll, A., & Jin, M. (2025). Safe and balanced: A framework for constrained multi-objective reinforcement learning. IEEE Transactions on Pattern Analysis and Machine Intelligence. [7] Tan, D. C., McCarthy, R., Acero, F., Delfaki, A. M., Li, Z., & Kanoulas, D. (2024, August). Safe Value Functions: Learned Critics as Hard Safety Constraints. In 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE) (pp. 2441-2448). IEEE. Other Strengths And Weaknesses: 1. Strengths - It extends REF to a state-action formulation, which is finer-grained for reinforcement learning. - The theoretical proofs are detailed. - The appendix is very detailed. 2. Weaknesses - Although the theoretical proofs are detailed, there is a lack of explanation of the derivation process. - The sparse cost settings are not convincing. 
Sparse cost settings indicate that most of the cases are safe, which enlarges the feasible region of safety. This setting actually simplifies the problem. Other Comments Or Suggestions: 1. Some key terms are not explained, at least not clearly, such as “maximal safety”, “safety polarization”, “REF”. 2. It would be better to add a section about “Hamilton-Jacobi reachability and reachability estimation function (REF) in Safe RL” in related work. 3. Add a section about hard-constraint safe RL in related work. 4. Explain the formulation of reachability and REF in Preliminaries. 5. The spacing in lines 101-102 on page 2 is incorrect. Questions For Authors: Q1: How does the “off-policy” setting affect the theoretical analysis, since the analysis procedure follows just one policy π but off-policy RL has different behavior and target policies? (RESPPO is an on-policy algorithm, so it has no such problem.) Q2: What is the purpose of introducing the gating operator in Section 5.1. and then getting rid of it in Equation (17)? Code Of Conduct: Affirmed. Overall Recommendation: 3
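For reference, the inequality flagged under Theoretical Claims admits a short standard derivation (a sketch, with both maxima taken over the same action set):

```latex
\max_a Q_1(s,a)
  = \max_a \big( Q_2(s,a) + Q_1(s,a) - Q_2(s,a) \big)
  \le \max_a Q_2(s,a) + \max_a \big| Q_1(s,a) - Q_2(s,a) \big|
```

Exchanging $Q_1$ and $Q_2$ gives the symmetric bound, and combining the two yields $\big|\max_a Q_1(s,a) - \max_a Q_2(s,a)\big| \le \max_a \big|Q_1(s,a) - Q_2(s,a)\big|$.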
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for the thorough reading of our paper. We address your valuable questions in the following responses. Q1: Sparse cost challenge In our setup, episodes terminate immediately upon safety violations, as we treat safety as a hard constraint. The learner receives no cost feedback until a violation occurs, resulting in at most one non-zero cost per episode—what we refer to as sparse safety cost. This sparsity makes it difficult to learn safety estimations, as the agent must infer early indicators of unsafe outcomes without intermediate cost signals. These challenges motivate our use of PER, which prioritizes unsafe transitions to improve policy safety. Q2: Question about proving $\gamma$-contraction A brief proof is available here: https://anonymous.4open.science/r/rebuttal-6740/proof_details.png. Q3: Two more hard-constraint baselines We conduct experiments with PPOBarrier on highway benchmarks and find it performs conservatively, with consistently low rewards across all tasks. In contrast, our SPOM and SPOM_PER achieve a better tradeoff. Detailed SWU scores, crash rates, and episode rewards are provided in https://anonymous.4open.science/r/rebuttal-6740/ppo_barrier_table.png, along with training curves in https://anonymous.4open.science/r/rebuttal-6740/ppo_barrier_training_curve.png. As for the second referenced work, its setting assumes access to analytical violation functions, while our SA-REF is learned without any domain-specific knowledge. Q4: Literature on HJ reachability and REF We will include all relevant literature on HJ reachability and REF in a new section of our Related Works. Briefly, early works employed HJ reachability value functions to assess state feasibility, relying on known system dynamics and numerical methods. Some studies approximate unknown dynamics using Gaussian Processes (Zhang, W. et al. 2023) or symbolic regression (Wang, Y. et al. 2024). 
Once reachability is computed, the state space is partitioned into feasible/infeasible regions to guide policy optimization (Zheng, Y. et al. 2024). Other works focus on formal safety verification of DRL systems (Dong, Y. et al. 2024). However, HJ-based value functions are not well-suited for stochastic MDPs during RL training. To address this, REF estimates unsafe probabilities via backward reduction (Ganai, M. et al. 2024). Our work extends REF to the state-action level, enabling state-dependent action masking to reduce safety violations. Q5: Literature on hard-constraint safe RL Thanks for the suggestions. We indeed discuss hard-constrained safe RL in our related works. Based on policy optimization strategies, we categorize prior works into two groups: action correction and action masking. Action correction modifies actions after they are proposed (e.g., via projection), while action masking alters the action distribution directly. The papers mentioned by the reviewer largely fit within this framework and will be incorporated into our Related Works. For instance, Tan, D. et al. 2024 uses value functions as control barrier functions in a shielding setup, aligning with action correction. Yang, Y. et al. 2023 learns certificates that softly penalize unsafe actions, guiding policies toward safer regions—similar in spirit to action masking. Suttle, W. et al. 2024 introduces truncated sampling from state-dependent safe action sets, which we view as a general form of action masking. We will include all the literature in our paper for completeness. Q6: How does the “off-policy” setting affect the theoretical analysis? We assume that actions are sampled from a behavior policy $\pi_b$ in **Section 4.2**, which is a common assumption in Q-learning to ensure sufficient exploration of the state-action space. Our theoretical analysis follows the classical Q-learning framework: we prove the $\gamma$-contraction property of our operator and demonstrate convergence in tabular MDPs. 
This directly parallels traditional Q-learning theory. Q7: Gating operator The gating operator in Section 5.1 represents our hard masking rule. In **Theorem 4.4**, we prove its convergence in tabular MDPs. When extending this method to deep Q-learning, we found that using Equation (17) as the objective yields better empirical performance. We report the ablation results in **Section 6.3**, where ‘OAM’ refers to Optimal Action Masking, alongside other polarization function choices. Q8: Key terms • Maximal safety: A policy that achieves the lowest probability of safety violations. • Safety polarization: The use of our polarization function to mask actions with relatively high SA-REF values. • REF: Can be regarded as the probability for the current state to reach unsafe regions under the current policy. We hope our responses have addressed your concerns and we welcome any further suggestions to improve the paper. If you find our clarifications satisfactory, we would greatly appreciate it if you could consider increasing your score accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. While some concerns have been addressed, the following concerns remain: Q1: I agree to treat hard-constraint safety as “episodes terminate immediately upon safety violations”. Do you mean that the cost is not sparse in the environment but sparse in the replay buffer due to the immediate termination? Q2: I listed the hard-constraint RL methods because these methods should be selected as baselines since your claim is hard constraint. However, you picked mostly soft-constraint RL methods as baselines. I think this is a fatal issue of this paper, where you study field A but select field B as baselines. Q3: The benchmarks are from 5-8 years ago. I think it could be OK because benchmarks may not update rapidly (although there is a survey of open-source simulators as in my comment). 
However, the selected baselines are also at least five years old (before 2020, except for RESPPO). I am curious why methods from 2020-2024 were not chosen for comparison. Q4: Besides, it is confusing that the hard-constraint baselines (such as RESPPO, PPO-barrier) cannot outperform soft-constraint baselines in hard-constraint settings. I am wondering if there is any reasonable justification. --- Reply to Comment 1.1.1: Comment: Thanks for your reply. We are happy to address the questions you raise. Q1: Sparse cost challenge If we were to force a distinction between whether the cost signal lies in the environment or in the replay buffer, we would argue that it is sparse in the replay buffer. This means that within the training data used by our algorithm, unsafe transition pairs are significantly less frequent than safe ones, making it difficult to accurately estimate the current safety situation using temporal difference learning. Q2: Baselines about hard-constraint safe-RL First, we clarify that the term hard-constraint in our work primarily refers to **the evaluation setting**, where an episode is immediately terminated upon any safety violation. **At the algorithmic level**, all safe RL methods inherently aim to balance the trade-off between reward and safety. In other words, the use of hard constraints in evaluation **does not invalidate comparisons** with techniques such as reward shaping or PPOLag. In fact, the PPOBarrier baseline could also be considered a variant of the PPOLag method, differing only in the way it learns a barrier certificate. Here, we want to emphasize that the distinction between hard-constraint and soft-constraint methods at **the algorithmic level** is often blurred in practice, making it difficult to draw rigid boundaries when selecting baselines. From our point of view, the key difference lies in how the control policy is optimized safely. 
We categorize existing approaches into action masking and action correction, as previously clarified in our response. In addition, we have carefully reviewed all the hard-constraint RL literature referenced. Nearly half of these works are grounded in a control-theoretic perspective, where the safety specification—such as $\phi(x)$ mentioned in Zhao, W. et al. 2024 and Zhao, W. et al. 2023 or $C(x)$ mentioned in Suttle, W. et al. 2024—is **manually defined for each individual task**. This is notably different from our setting, where we do not assume access to any domain-specific analytical safety functions. Instead, we **only rely on cost signals obtained from the simulator**, which aligns closely with the standard RL paradigm. Regarding the paper Wei, H. et al. 2024, we argue that its setting falls outside the scope of deep RL. It assumes a linear expression of the Q-function and evaluates on a simple Frozen Lake environment, where both the state and action spaces are finite. The work Tan, D. C. et al. 2024 is indeed equivalent to SafeQ mentioned in our paper. Both approaches use binary costs to learn a safety value function and adopt rejection sampling to enforce constraints—an idea that can be interpreted as a form of action masking. In our work, we demonstrate that simply truncating actions based on a fixed masking threshold across all state-action pairs is insufficient under our MAXSAFE framework, as illustrated by the motivating example in **Appendix A**. Finally, the paper Gu, S. et al. 2024 focuses on managing multiple objectives. We argue that their setting still relies on predefined cost budgets and does not fall under the category of hard-constraint methods, if one insists on making such a distinction. To conclude, we have provided a thorough explanation for why we do not include the aforementioned works as baselines in our study. 
Nevertheless, we warmly welcome suggestions for additional baselines that are suitable for our problem setting, and we are willing to reproduce their results for a fair comparison. Q3: Why do the hard-constraint baselines not outperform the soft-constraint baselines? PPOBarrier and RESPO have been previously implemented in Safety-Gymnasium. However, applying them to our safety-critical benchmarks does not **necessarily mean that they outperform simpler approaches such as reward shaping**. There may be multiple reasons for this. One possible explanation is that **directly applying a Lagrangian method** may not be an effective way to manage the reward-safety trade-off in our benchmarks. Additionally, the learning of the barrier function in PPOBarrier may be suboptimal **due to the sparse cost setting**. We truly appreciate the reviewer’s constructive feedback. If you find our clarifications satisfactory, we would greatly appreciate it if you could consider increasing your score accordingly.
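The safety-prioritized replay discussed in this thread can be sketched with simple proportional prioritization. The class name, the priority exponent, and the use of the absolute (REF) TD error as the priority are illustrative assumptions for a minimal sketch, not the authors' exact implementation.

```python
import random

class SafetyPER:
    """Proportional prioritized replay: transitions with larger (REF) TD
    error are sampled more often, counteracting sparse cost signals where
    unsafe transitions are rare in the buffer."""

    def __init__(self, capacity, alpha=0.6, eps=1e-3):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:  # drop oldest when full
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        return [self.data[i] for i in idx], idx

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):  # refresh priorities after learning
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```

With a near-zero TD error on safe transitions and a large TD error on a rare unsafe one, the unsafe transition dominates sampling, which is the intended effect under sparse costs.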
Summary: The paper proposed MAXSAFE, a safe RL algorithm which aims to maximize the return while minimizing (reducing to zero) the probability of visiting an unsafe state. MAXSAFE is built on the Q-learning algorithm, and is thus applicable to discrete-action MDP problems. The major contribution of this paper is that it outlines the proposed process for masking unsafe actions from the Q-safety critic and provides a theoretical guarantee in the tabular MDP setting. Claims And Evidence: 1. One assumption is that the policy space is sufficiently large that the unsafe probability can be minimized to near zero. Although the paper cited a prior work which also makes this assumption, I'd argue that this is not generally applicable to most safe RL problems, where the transition dynamics can be stochastic and hence it might not be feasible to reduce the probability of visiting unsafe states to near zero. 2. The proposed method is limited to the discrete action setting, but most safe RL problems and benchmarks handle continuous actions in general. Methods And Evaluation Criteria: 1. The CMDP-based RL baseline selected is based on RCPO (Tessler et al., 2019), which (a) is an on-policy algorithm, and (b) dynamically adjusts the safety-reward tradeoff coefficient $\lambda$ based on the specified level of safety. These two features are not present in MAXSAFE, which is an off-policy algorithm and aims to absolutely minimize the probability of visiting unsafe states. This makes me think that this selection of baseline might not be valid, considering that an on-policy algorithm may need more samples to converge and RCPO also needs to learn the optimal tradeoff. Perhaps an SAC-Lagrangian baseline (modified to handle discrete actions) with different tradeoff settings would be a better baseline? 2. I have a little bit of doubt whether condensing both the safety and reward performance figures into a single evaluation metric (SWU) is useful. 
It seems to overly simplify the tradeoff into one single number without any statistical confidence value. Perhaps using the more conventional way, e.g. report reward and safety separately with both average and stdev, would be better. Theoretical Claims: The theoretical part in the main paper mainly includes the steps to derive the steps to learn the optimal action masks, which has some resemblance to the cited baseline SafeQ (Srinivasan et al., 2020). The steps look sound, do note that I did not carefully check the detailed step-by-step proof in the appendix. Experimental Designs Or Analyses: Most of the safe RL literature uses the safety gym / gymnasium or bullet gym for experiments and the chosen environments in this paper: highway-environments and ACC are a little unfamiliar to me. Perhaps running experiments in these well-tested benchmark domains would be easier for reviewers to assess. Supplementary Material: The supplementary material includes additional explanation, theoretical proofs, implementation details and experiment domain descriptions. Code is not submitted. As mentioned earlier, do note that I did not carefully check the detailed step-by-step proof in the appendix. Relation To Broader Scientific Literature: MAXSAFE is relevant to the safe RL work as it provides an algorithm which can learn a safe policy while optimizing reward, although it has some inherent restrictions which might limit its applicability to general safe RL setting. Essential References Not Discussed: NA Other Strengths And Weaknesses: Using Prioritized Experience Replay is an interesting strength of MAXSAFE as it enhances its capability in handling sparse and catastrophic safety violation setting. Other Comments Or Suggestions: NA Questions For Authors: Please refer to earlier sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
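As background for the action-masking idea this review summarizes, a minimal threshold-based mask over discrete Q-values can be sketched as follows. This is the naive state-agnostic variant that the paper's Appendix A argues is insufficient (motivating learned, state-dependent masks); the fixed threshold and the fallback rule here are illustrative assumptions, not the paper's polarization method.

```python
import numpy as np

def masked_greedy_action(q_values, saref_values, threshold):
    """Pick argmax Q among actions whose estimated unsafe probability
    (SA-REF) is at or below `threshold`; fall back to the least-unsafe
    action if every action is masked out."""
    safe = saref_values <= threshold
    if not safe.any():
        return int(np.argmin(saref_values))  # no safe action: minimize risk
    q = np.where(safe, q_values, -np.inf)    # mask unsafe actions
    return int(np.argmax(q))
```

The fragility of the fixed threshold is visible even here: tightening it flips the decision from "best safe action" to a pure risk-minimizing fallback, regardless of reward.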
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for the valuable feedback of our work. Q1: The soundness of our assumption that there exists a sufficiently large policy space with minimal unsafe probability. We assume that in the environments we consider, there exists a sufficiently large policy space in which the probability of unsafe events is minimal. Such environments are common in autonomous driving and robotic control. For instance, there are many feasible driving policies that allow a vehicle to operate safely on the road without collisions, as well as a sufficiently large set of policies enabling a robot to navigate without falling or colliding with obstacles—indicating that a large policy space with minimal unsafe probability indeed exists in these environments. This assumption aligns well with many practical scenarios. Building on this assumption, our MAXSAFE policy selects from within this space and aims to optimize reward performance while ensuring safety. Q2: The proposed method is limited to discrete action spaces. Our action masking method is currently only suitable for RL tasks with discrete action spaces, as simple action masking strategies may not perform well in continuous settings. We view this as an interesting direction for future work, which would require more advanced techniques to better optimize the trade-off between safety and performance. Nevertheless, the proposed safety Prioritized Experience Replay (safety PER) is a general approach and can also be applied to the learning of the cost critic. To validate this, we run experiments on two environments—SafetyCarCircle and SafetyPointGoal—from the Omnisafe benchmark, https://github.com/PKU-Alignment/omnisafe. The results suggest that the simple SAC-Lag algorithm can benefit from being combined with safety PER to achieve better performance. We use 6 random seeds for evaluation, and we provide both the training curves and the mean/std of episode return and episode cost. 
The results are available at:https://anonymous.4open.science/r/per_test/results.png, https://anonymous.4open.science/r/per_test/training_curve.png. Q3: Is a SAC-Lagrangian baseline (modified to handle discrete actions) with different trade-off settings a better baseline? We include a variety of off-policy baseline RL algorithms. Since our tasks involve discrete action spaces, we adopt DQN-based off-policy methods. For example, our baseline RCDQN is an off-policy algorithm, and all algorithms with names ending in “DQN” are off-policy methods designed for discrete action settings. Q4: Usefulness of the SWU score. We adopt the evaluation setting from previous literature to demonstrate the trade-off between safety and reward. In our paper, we also report the training results for episode reward and crash rate in **Table 5**. Q5: Choice of benchmarks Regarding the concern about why we do not use Safety-Gym as our benchmark suite, the primary reason is that our setting fundamentally differs from that of Safety-Gym. In our framework, safety is treated as the highest priority. Specifically, we assume that an episode terminates immediately upon the execution of any unsafe action (i.e., entering an unsafe region). In contrast, Safety-Gym employs a scalar cost signal and allows temporary safety violations as long as the expected cumulative cost remains within a predefined budget. For example, the “circle” environment in Safety-Gym’s safe navigation task is conceptually similar to the “circle” environment in our benchmarks. However, the underlying assumptions differ: in Safety-Gym, safety is treated as a constraint, and episodes continue despite violations, provided the overall cost remains within budget. In contrast, our formulation enforces strict safety—any violation leads to immediate episode termination. This setting better reflects real-world scenarios where safety is non-negotiable and must take precedence over performance. 
A formal comparison of the two formulations is provided in the **Preliminaries** section of our paper. We refer the reviewer to that section for detailed definitions. Due to our strict safety assumption, the cost signal in our setting only appears at the end of an episode, resulting in significantly sparser cost feedback. To address this challenge and improve learning efficiency, we introduce Prioritized Experience Replay (PER) in our SPOM_PER method. This design is motivated by the need to balance reward maximization with strong safety guarantees. We hope our responses have addressed your concerns and we welcome any further suggestions to improve the paper. If you find our clarifications satisfactory, we would greatly appreciate it if you could consider increasing your score accordingly. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response and it clarified some of the points I raised. I do have some comments remaining: 1. While I noted that Prioritized Experience Replay can be applied to learning of cost critic in continuous action setting, I understand this is not the core part of this paper though and SPOM is still targeted at discrete action domain. That said, I do note that the authors mentioned this is part of interesting future work. 2. I understand that a number of the baselines are DQN based, I specifically mentioned SAC-Lag because it can be applied to discrete action setting while might explore better than DQN (epsilon-greedy) in terms of exploring the safety space, due to the entropy term built in to the objective. Maybe SAC-Lag or other SAC-based baseline would be helpful in this case? 3. In Table 5, it'd be helpful to list the statistical deviation too (e.g. std dev). 4. For safety-gymnasium comparison, I understand that the safety setting is different. However, I think the safety gymnasium env can still be lightly extended and used, where an episode is terminated after incurring a cost. 5. 
It'd be great for assessing reproducibility if the code is shared. --- Reply to Comment 1.1.1: Comment: Thanks for your reply. We are happy to address the questions you raise. Q1: SAC-based baseline We implement the SAC-Lag baseline on four autonomous driving benchmarks. Since our task involves a discrete action space, we follow the setup of Soft Actor-Critic for Discrete Action Settings (https://arxiv.org/abs/1910.07207) to compute the TD target and policy loss. We employ a double Q-network for the reward critic and a single network for the cost critic, and apply automatic $\alpha$ tuning, as is standard in SAC. To align with our hard-constraint cost violation setting, we set the cost limit to zero, using it as the Lagrangian penalty term in the actor loss, consistent with the PPO-Lag baseline. The results (episode rewards, crash rates and SWU scores) are available at the following link: https://anonymous.4open.science/r/per_test/SACLag.png. The results show that our SPOM and SPOM_PER achieve higher SWU scores than the SAC-Lag baseline. The entropy-based exploration strategy adopted by the SAC-Lag baseline leads to conservative behavior across our benchmarks, which means it is suboptimal and not better than an epsilon-greedy exploration policy in our hard-constraint sparse cost setting. However, we believe that identifying the optimal exploration strategy under sparse cost conditions is orthogonal to our main contribution and represents an interesting direction for future research. Q2: Add standard deviation results Thank you for your valuable reminder. We also believe that including standard deviations provides a more rigorous and comprehensive reflection of the experimental results. All standard deviation results have now been included in the table, available at the following link: https://anonymous.4open.science/r/per_test/mean_std_results.png. **Table 5** will be updated accordingly in the next version of the paper. 
Q3: Safety-Gymnasium Thank you for your suggestion. We did not initially consider directly using Safety-Gymnasium, as our evaluation setting involves hard constraints, and many tasks in Safety-Gymnasium are designed for continuous action spaces. Given the current time constraints, adopting Safety-Gymnasium would require us to not only rewrite the benchmark interfaces but also re-implement all of our existing DQN-based baselines for the new environments, which demands substantial time and effort. Nevertheless, we find the reviewer’s suggestion very insightful and consider re-implementing our methods to be compatible with Safety-Gymnasium as a direction for future work. Q4: Code availability Yes, we would be happy to share our code upon acceptance of the paper. We truly appreciate the reviewer’s constructive feedback. If you find our clarifications satisfactory, we would greatly appreciate it if you could consider increasing your score accordingly.
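The discrete SAC-Lagrangian actor objective the authors describe (an entropy term plus a Lagrangian cost penalty, with the expectation computed exactly over the categorical policy) can be sketched per state. The function names, shapes, and the way the multiplier enters are illustrative assumptions following the general discrete-SAC recipe, not the authors' code.

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax over a 1-D logit vector."""
    m = x.max()
    return x - (m + np.log(np.exp(x - m).sum()))

def discrete_sac_lag_actor_loss(logits, q_reward, q_cost, alpha, lam):
    """Per-state actor loss E_{a~pi}[alpha*log pi(a|s) - Q_r(s,a) + lam*Q_c(s,a)].

    With a discrete action space the expectation is computed exactly by
    summing over actions, rather than sampled as in continuous SAC.
    """
    logp = log_softmax(np.asarray(logits, dtype=float))
    pi = np.exp(logp)
    per_action = alpha * logp - np.asarray(q_reward) + lam * np.asarray(q_cost)
    return float((pi * per_action).sum())
```

Raising the multiplier `lam` increases the loss of cost-incurring actions, which is how the Lagrangian term steers the policy toward safety.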
Summary: The paper has a clear motivation to improve safe RL through action masking. To avoid directly applying infinity to Q, the paper introduces the polarization functions. To improve the learning of REF, the paper uses prioritized learning. The result shows significant improvement in both reward and safety compared to the baselines. ## update after rebuttal It seems that the authors have addressed most of the other reviewers' concerns well. I am happy to raise my score to 5. Claims And Evidence: Yes, the claim made by the submission is supported clearly. There are clear improvements in the experiments. Methods And Evaluation Criteria: The proposed method is in the right direction to tackle the key challenge of safe RL - the balance between two objectives. The evaluation criteria seem correct to me. Hopefully, the authors can add Safety RL Gym as an additional benchmark, but I am totally okay without it. Theoretical Claims: I checked the convergence proof and there is no issue. Experimental Designs Or Analyses: The proposed experiment is a new experimental setting. I double-checked the settings and they look good overall, including the metrics and the reward design. Supplementary Material: Appendix A looks great in terms of why the threshold masking does not work. Appendix B is very detailed. Appendix C includes ppo-lag and respo, and explains their failure due to the high variance. Relation To Broader Scientific Literature: The key contribution of the paper is to introduce the polarization function, which I feel is the correct way of learning an action-masked policy. Typically, people do a mixture of two modes - optimizing an objective and minimizing the cost - by adding them together as one objective. These approaches often suffer from a sensitive Lagrangian hyperparameter or other issues in balancing the two modes.
The paper makes a crystal-clear improvement by directly injecting the safety signal into the learning of the Q function, which I feel to be the correct way to ensure safety inherently when learning a policy. The introduction of the polarization function also feels novel and tackles the key challenge of learning the correct Q function. Essential References Not Discussed: NA Other Strengths And Weaknesses: The paper is written very well. It is overall very easy to read, and easy to grasp the author's key idea. Other Comments Or Suggestions: I wish the impact statement could have more text. Since this is about Safe RL, any improvement may have a positive impact on society if the algorithm is deployed at scale in the real world. Questions For Authors: 1. Is there any particular reason why Safety Gym is not included as a benchmark? Or why do the authors choose to run the methods on a new benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewer for the positive recognition of our work. Regarding the concern about why we did not choose Safety-Gym as our benchmark suite, the primary reason is that our setting fundamentally differs from that of Safety-Gym. In our framework, safety is treated as the top priority. Accordingly, we assume that an episode terminates immediately upon the execution of any unsafe action (i.e., entering into any unsafe region). In contrast, Safety-Gym employs a scalar cost signal and allows safety violations as long as the expected cost-to-go remains within a predefined budget. For example, the “circle” environment in Safety-Gym’s safe navigation task is conceptually similar to the “circle” environment in our benchmarks. However, the underlying assumptions are different: in Safety-Gym, safety is treated as a constraint, and episodes continue even when temporary violations occur, as long as the total cost remains within the safety budget. In contrast, our formulation assumes strict safety—any safety violation immediately terminates the episode. This setting better reflects scenarios where safety is non-negotiable and takes precedence over performance. The mathematical comparison between these two setups is provided in the Preliminaries section of our paper, and we refer the reviewer to that part for detailed formulation. As a result of our strict safety assumption, the cost signal in our setting only appears at the end of an episode, making it significantly more sparse. To address this challenge and improve learning efficiency under such sparse cost feedback, we propose the use of Prioritized Experience Replay (PER) in our SPOM_PER method. This design decision is also motivated by the need to balance reward maximization with strong safety guarantees. We truly appreciate the reviewer’s positive feedback and encouragement. 
We hope our responses have addressed your concerns, and we welcome any further suggestions to improve the paper. --- Rebuttal Comment 1.1: Comment: Thank you so much for the explanation. I understand now that the new benchmark better reflects the assumption of immediate termination upon reaching an unsafe state. I maintain my recommendation for acceptance.
Summary: This work introduces a chance-constrained bi-level optimization framework called MaxSafe for the maximal-safety RL problem. MaxSafe first minimizes the unsafe probability and then maximizes the return among the safest policies. Claims And Evidence: 1) The authors assume that there is a sufficiently large policy space with minimum unsafe probability. They justify their assumption by saying that it has already been explored in the literature (e.g., Ganai et al., 2023). However, I'd say that this assumption depends very much on the environment at hand. What class of environments would justify such an assumption? Methods And Evaluation Criteria: Yes, the experimental evaluation is rather thorough, with several relevant RL environments considered, and different related approaches compared. Theoretical Claims: The theoretical results appear to be sound. Proofs are provided in the appendix, but I did not check them. The authors claim that "Extensive experiments demonstrate that our method achieves an optimal trade-off between reward and safety, delivering near-maximal safety." Is this meant in terms of the SWU metrics introduced in Sec. 6? Experimental Designs Or Analyses: The authors present experiments on autonomous driving and safe control tasks, demonstrating that the proposed algorithms, SPOM and SPOM_PER, achieve the best reward-safety trade-off (as computed by safety-weighted utility) among state-of-the-art safe RL methods. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: The state of the art is discussed in depth in the related work section. In particular, the authors refer to (Ganai et al., 2023) and their reachability estimation function (REF) to capture the probability of constraint violation at state s under a given policy. In this work, the authors extend the definition of REF to be state-action dependent. Essential References Not Discussed: Nothing relevant not discussed to my knowledge.
Other Strengths And Weaknesses: 1) The paper makes both theoretical (Sec. 4) and practical (Sec. 5) contributions. In Sec. 4 the authors define a methodology to obtain optimal policies with provably minimal safety violations. Then, in Sec. 5 they adapt the methodology to an implementation with deep RL, using polarization functions and safety-prioritized experience replay. 2) As a weakness, the experimental evaluation does not appear to be conclusive; this might be due in part to the fact that the trade-off between safety and performance is elusive. As it appears from Fig. 1, the methods proposed here, SPOM_PER and SPOM, do not always guarantee the lowest crash rate. 3) Also, related to the point made above, it is not clear how justified the assumption is that there is a sufficiently large policy space with minimum unsafe probability. Consider a stochastic exploration task, where there is a non-zero probability of ending up in an unsafe state. It might be that no policy has zero unsafe probability, as there is always some non-zero chance of ending up in an unsafe state. It looks like in such a case the method described in the paper would fail. Other Comments Or Suggestions: Nope Questions For Authors: Could the authors answer points (2) and (3) in Strengths And Weaknesses? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer for the valuable feedback and constructive comments. To better quantify the trade-off between reward and safety, we adopt the SWU score introduced by Yu, H., Xu, W., and Zhang, H. in Towards Safe Reinforcement Learning with a Safety Editor Policy (NeurIPS 2022b) as our final evaluation metric to compare the performance of different algorithms. The following sections address the two main questions raised by the reviewer. Q1: The soundness of our assumption that there exists a sufficiently large policy space with minimal unsafe probability. We assume that in the environments we consider, there exists a sufficiently large policy space where the probability of unsafe events is minimal. Such environments are common in autonomous driving and robotic control. For instance, there are many possible driving policies that allow a vehicle to operate safely on the road without collisions, and a sufficiently large set of policies that enable a robot to navigate without falling or colliding with obstacles—indicating that a large policy space with minimal unsafe probability indeed exists in these environments. This assumption aligns well with many practical scenarios. Building on this assumption, our MAXSAFE policy selects policies from this space and aims to optimize reward performance while ensuring safety. Q2: The proposed methods SPOM_PER and SPOM do not always guarantee the lowest crash rate. In our experiments, we use the SWU score to evaluate the trade-off between safety and reward. As shown in Table 1 of our paper, SPOM_PER and SPOM achieve the best overall performance across all six benchmark environments. Although the Recovery baseline achieves the lowest crash rates in Roundabout and Intersection, its reward performance is significantly worse, indicating that it suffers from an overly conservative policy in these two environments.
In contrast, our SPOM_PER and SPOM attain the highest SWU scores across all benchmarks. Also, the results demonstrate that applying Prioritized Experience Replay (PER) to our proposed SA-REF effectively reduces safety violations while preserving strong reward performance. We sincerely appreciate the reviewer’s constructive feedback, which has been invaluable in improving the clarity and rigor of our work. We hope our detailed responses have addressed your concerns, and we welcome any further suggestions to strengthen our paper.
Sparse Autoencoders, Again?
Accept (poster)
Summary: The paper proposes sparse variational autoencoders as an analog to sparse autoencoders by adding a sample-wise sparsity mask. The authors then consider low-dimensional data and show that only these low dimensions are active at optimal parameters. They then compare this model on multiple real-world datasets to validate that it can outperform other sparse autoencoder benchmarks. Claims And Evidence: The claims are supported by evidence; however, I have some concerns about the theory part, see Theoretical Claims below. After the rebuttal, the concerns have been addressed. Methods And Evaluation Criteria: The benchmarks seem sensible; it should be noted that the only comparison made is against other sparse autoencoder models. Theoretical Claims: I went through the proofs in appendix F; they seemed correct, and comments are in the questions below. Experimental Designs Or Analyses: I did not check the experiments in detail. Supplementary Material: Went through the proofs in appendix F, skimmed the rest of the appendix. Relation To Broader Scientific Literature: The results further the understanding of sparse autoencoders by considering variational autoencoders. They can also be of more general interest to representation learning and interpretability, as the learned structure is quite explicit. Essential References Not Discussed: None Other Strengths And Weaknesses: (+) The idea for this paper is well motivated and the paper is well structured. (+) The novel ideas in this paper warrant exploration and I believe this to be a good direction for research. (-) The overall contribution in this paper feels a bit shallow in the sense that the theoretical claims are quite surface-level and the experiments are still on simpler (if real-world) data and compared against other sparse autoencoder models only. There is not one particular thing wrong, but it seems to lack a strong point that really has an impact.
I would be happy to have my mind changed on this by the authors' rebuttal. (-) The theoretical contribution is not very clear, both the claims themselves and the proofs (see questions below). Other Comments Or Suggestions: See questions below Questions For Authors: - I am a bit confused about Definition 4.1. Manifolds of dimension $r$ that are diffeomorphic to $\mathbb{R}^r$ are in particular diffeomorphic to an open ball in $\mathbb{R}^r$ (as $\mathbb{R}^r$ is diffeomorphic to a ball). This seems very restrictive, as it means that up to diffeomorphic equivalence only open balls are considered. In particular this excludes any compact manifold (e.g. sphere) as well as manifolds with boundary. From an applied perspective this restriction seems strange; could the authors elaborate on why it is needed and whether it can be relaxed? - Regarding Definition 4.3, why does $\sigma_z(x;\phi_\gamma) = \Theta(1)$ correspond to inactive dimensions? In principle we could still have $1 - \sigma_z(x;\phi_\gamma) = \Theta(1)$. Why do we not need that $\sigma_z(x;\phi_\gamma) \approx 1$? - The notation is not quite clear: in (4) $\sigma_z[x;\phi]$ is defined; is this different from $\sigma_z(x;\phi)$ defined in (2)? For example, in Definition 4.3 the () brackets are used. - For Theorem 4.5 you need $\sum_i r_i \leq \kappa$. Do you have any intuition what happens if this is not the case? - Why is 4.6 called a Corollary? How does it follow from Theorem 4.5 (it seemed that, rather than being deduced, it was proven in a similar manner)? - The first paragraph in F.1 states that you first analyze and then construct a solution, but afterwards you seem to construct a solution and then analyze. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the comprehensive review of our work, particularly with respect to checking proof details and pointing out multiple valuable corrections. **Comment:** *The benchmarks seems sensible ... but the only comparison made is against other sparse autoencoder models.* **Response:** Actually, we also compare against *diffusion* in Section 5.3 which is definitively not an SAE. Additionally, we compare extensively against vanilla VAEs, which have not historically been considered as SAEs per se. Still, the predominant focus of our work is indeed on improving upon SAE models. **Comment:** *I did not check the experiments in detail ... Went through the proofs in appendix F...* **Response:** Thanks for checking the proofs, *this is a great help*. The diversity and complementary nature of our experiments also serve as a key part of our contribution. **Comment:** *(+)The idea for this paper is well motivated and the paper is well structured (+) The novel ideas in this paper warrant exploration and I believe this to be a good direction for research.* **Response:** We appreciate acknowledgement of the novelty of our work, and indeed no prior paper has repurposed and analyzed VAE models in the way that we have. **Comment:** *(-) The overall contribution in this paper feels a bit shallow ... comparisons against other sparse autoencoder models only ...* **Response:** We emphasize that VAEs represent one of the highest cited, influential ML frameworks of the last 10 years, and yet they have never been previously applied or analyzed in the context of SAE tasks as we have done. Nor have the specific attributes of adaptive sparsity, local minima smoothing, and guaranteed maximally sparse global optima been cohesively addressed in the context of widely-used SAE models. Still, as a contribution to an ML conference we do not match the mathematical depth or generality of a pure theoretical contribution on manifold geometry, etc. 
Instead, we produce a brand new VAE use case, *one that requires no additional model complexity*, and elucidate unique attributes shared by no prior approaches. We also offer extensive supporting experimentation, including comparisons with diffusion models beyond SAEs alone. **Comment:** *(-) The theoretical contributions need clarification ...* **Response:** Please see our specific responses elsewhere. However, broadly speaking, our results establish that the proposed VAEase energy function uniquely and simultaneously achieves two desirable properties: (i) The global minima exactly reflect union-of-manifold structure in data, and (ii) the loss exhibits a selective smoothing mechanism capable of reducing local minimizers. *No prior SAE or VAE approach achieves both*; provably so in the sense described in Section 4 for the wide classes we consider as baselines. **Comment:** *Question about Definition 4.1 and whether stated conditions can be relaxed, etc.* **Response:** Great suggestion. The issue here relates to an inadvertent typo in our submission, where we used $\mapsto$ instead of $\rightarrow$ in Definition 4.1. Crucially, we do *not* require the image set in $\mathbb{R}^r$ to be an open ball. This implies: (i) the manifold's image can have an arbitrary topology as long as a global diffeomorphism exists; (ii) manifolds with a boundary remain valid objects of study, as the definition focuses on the dimensionality of the image set rather than imposing openness constraints. **Comment:** *Regarding Definition 4.3, do we really need $\sigma_z(x;\phi_\gamma) = \Theta(1)$?* **Response:** In the proof from Section F.1 (Line 1003), the correct asymptotic requirement should be $\sigma_z^2(x;\phi_\gamma) = O(\gamma)$ for *active dims* or $1 - \sigma_z^2(x;\phi_\gamma) = O(\gamma)$ for *inactive dims* to eliminate constants in the inequality. We will clarify this in the revision; thanks for pointing out this oversight in our original draft.
**Comment:** *The notation is not quite clear ... specifically brackets vs parentheses in equations (2) and (4).* **Response:** We adopt the convention (shared by some ML works) that parentheses and brackets are interchangeable, and are chosen purely for making nested cases more readable, i.e., to avoid multiple parentheses or brackets in a row that can be visually difficult to parse. But this is obviously a minor stylistic choice that can be changed if the reviewer prefers. **Comment:** *Why is 4.6 called a Corollary ...* **Response:** Corollary 4.6 involves analysis closely related to Theorem 4.5, and the final conclusions are complementary. Still, we can easily change this designation if preferable. **Comment:** *The first paragraph in F.1 is not clear, etc...* **Response:** Great catch. Indeed, the original text was backwards; we will correct it to state that we first construct a feasible solution and then analyze to form a necessary condition for optimality. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns I have updated my review accordingly. --- Reply to Comment 1.1.1: Comment: Thanks again for evaluating technical aspects of our paper in particular and for engaging with our rebuttal.
Summary: After a discussion of the strengths and weaknesses of variational autoencoders (VAEs) and sparse autoencoder (SAEs), this paper proposes an adaptation of the VAE architecture to enhance sparsity and target the interpretability objective that motivates SAEs. The idea is conceptually simple: use the learned variance network $\sigma_z$ as a mask to select the active latent dimensions of z by element-wise multiplying the latent representations z by 1 - $\sigma_z$. The authors theoretically show that this adjustment allows to benefit from the local minima smoothing properties of VAEs while mimicking the behavior of SAEs. Then, the authors present empirical evidence supporting the effectiveness of the proposed VAEase architecture. In particular, they compare it to a vanilla VAE, and SAEs in which sparsity is enforced by L1 regularization, log and top-k activations. The methods are evaluated in a toy setting in which data are generated in a small number of low-dimensional manifolds and then embedded into a space with larger dimensionality. Additionally, the auto-encoding methods are evaluated on the reconstruction of image data (MNIST and Fashion-MNIST) and of the activations of a Pythia language model. Finally, the paper presents a comparison of VAEase to diffusion models on the reconstruction of pseudo-MNIST data generated using a GAN. Claims And Evidence: The authors claim that the VAEase architecture is able to achieve a better sparsity-reconstruction trade-off than SAEs and vanilla VAEs. This claim is adequately supported by theoretical and empirical results. Methods And Evaluation Criteria: The experimental setup in both the toy setting and on image and language model data is reasonable. The evaluation metrics for sparsity and reconstruction quality are appropriate. Theoretical Claims: I did not find any issues in the proofs of the theoretical claims. However, it’s possible that I might have missed something. 
Experimental Designs Or Analyses: The experimental analyses in the toy setting and on image and language model data are sound. However, given that one of the main applications of SAEs is in interpreting language model activations, additional empirical validation of language model data (e.g., on a more recent, larger model) would make the paper stronger. Supplementary Material: I briefly reviewed the provided code. Relation To Broader Scientific Literature: The contributions of the paper connect with the literature about VAEs and with recent works proposing the application of SAEs to interpret language model activations. This is adequately discussed by the authors in Section 2. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: - What does the reconstruction error/sparsity trade-off for language model data look like? A visualization similar to Figure 3 (possibly in the appendix) would be appreciated. Code Of Conduct: Affirmed. Overall Recommendation: 4
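As context for the gating idea summarized in the review above (element-wise multiplying the latents $z$ by $1 - \sigma_z$), here is a toy sketch of the mechanism. All names, shapes, and the sigmoid parameterization of $\sigma_z$ are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_sig):
    """Hypothetical linear encoder: per-dimension mean and a std in (0, 1)."""
    mu = x @ W_mu
    sigma = 1.0 / (1.0 + np.exp(-(x @ W_sig)))   # sigmoid keeps std in (0, 1)
    return mu, sigma

def gated_latent(mu, sigma, sample=True):
    """The VAEase-style mask: dimensions with sigma -> 1 are pruned, since
    their gate (1 - sigma) -> 0; dimensions with sigma -> 0 pass through."""
    z = mu + sigma * rng.standard_normal(mu.shape) if sample else mu
    return z * (1.0 - sigma)

def active_dims(sigma, tol=0.1):
    """Count dimensions whose gate (1 - sigma) exceeds tol, per sample."""
    return ((1.0 - sigma) > tol).sum(axis=-1)

x = rng.standard_normal((4, 8))
W_mu = rng.standard_normal((8, 16))
W_sig = rng.standard_normal((8, 16))
mu, sigma = encode(x, W_mu, W_sig)
z = gated_latent(mu, sigma, sample=False)
```

A decoder would then reconstruct `x` from `z`; a dimension with `sigma` near 1 contributes `(1 - sigma) * mu`, which is close to zero, so the sparsity pattern can differ per input sample without any sparsity hyperparameter.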
Rebuttal 1: Rebuttal: Thanks for the quite comprehensive and accurate summary of our work. Indeed this description captures the essence of what we hoped to convey. **Comment:** *...additional empirical validation of language model data (e.g., on a more recent, larger model) would make the paper stronger.* **Response:** We agree that further investigation into LLM applications of our work is desirable. However, because we are not domain specialists w.r.t. LLM interpretability in particular (which involves a number of qualitative investigations), we have deferred this application-specific dimension to future work. We also remark that prior SAE papers devoted to LLMs have proposed the same metrics that we adopt herein for quantification purposes. That being said, as our emphasis is on diversity of application, especially very recent ones, we have added an additional LLM-driven experiment whereby the downstream goal is to generate hypotheses regarding relationships between text data (e.g., headlines) and a target variable (e.g., clicks); please see https://arxiv.org/pdf/2502.04382. While we defer details to this prior paper (no affiliation to us), the key optimization task involves learning sparse representations of text embeddings from Yelp reviews. Results are shown in the new table below, where we observe that VAEase again requires the fewest average active dimensions (AAD) even while maintaining the best reconstruction error (RE). This further expands the diversity of scenarios whereby VAEase outperforms existing alternatives. Thanks for the suggestion.

| Model | RE | AAD |
|-----------|--------|------------|
| SAE-$\ell_1$ | 0.2236 | 61.92 |
| SAE-$\log$ | 0.2298 | 55.97 |
| SAE-$T_k$ | 0.2168 | 30.00 |
| VAE | 0.2774 | 90.00 |
| VAEase | **0.1869** | **16.74** |

**Comment:** *What does the reconstruction error/sparsity trade-off for language model data look like?
A visualization similar to Figure 3 (possibly in the appendix) would be appreciated.* **Response:** Good question. While Openreview does not allow us to upload new figures, we have run this experiment and the trend closely matches that from Figure 3. This further solidifies the stable advantage of VAEase across quite distinct domains. And such a figure can easily be added to a revised appendix as the reviewer suggests.
Summary: The paper introduces a method to explicitly model sparsity in variational autoencoders (VAEs) that leverages a simple (parameter-free) transformation of the latents before decoding. The method is introduced by intuitive construction and supported by theoretical arguments and results. Experimental comparisons show the effectiveness of the method in learning sparse representations of data. ## Update after rebuttal After clarifications from the authors regarding my criticism of the active dimension removal experiment and additional experiments provided in response to another reviewer, I updated my score to 4 (Accept) to better represent my evaluation of the paper. Claims And Evidence: The claims are overall reasonable and the evidence provided is mostly convincing. Methods And Evaluation Criteria: The experimental setup is valid and the proposed method is sound for the problem of interest. Theoretical Claims: I checked the correctness of the proof for Theorem 4.5. Despite minor writing/formatting issues and convenient assumptions whose applicability in practical scenarios I'm not sure about (Lipschitz continuity), I have no issues to mention. Experimental Designs Or Analyses: I think the experiments conducted are adequate overall, except for the few points below. As far as I understand, the masking of active dimensions in the experiment illustrated in Fig. 3 follows the partitioning criteria described in lines 367--384 ("throughout this paper" is stated therein). Would it not be fairer, when viewing this against reconstruction error, to mask dimensions in order of least to most disturbing w.r.t. the reconstruction error? In other words, sequentially remove the next active dimension that would perturb the average reconstruction loss the least (maybe different dimensions are picked per model). In lines 420--423: This sentence in the analysis is faulty in my view, since their own method estimates $r \approx 15$ on Pseudo-MNIST.
Since the setup upper bounds the ground-truth $r$, a simple (but very laborious) test could follow this setup from 16 dimensions down to one, stopping whenever the whole setup degenerates. Supplementary Material: I reviewed most of the supplementary material. It was surprising to me how the quality of the writing is considerably worse in the appendix sections compared to the main paper. If a paper relies so much on supplementary material to substantiate its contributions, this should be addressed. Relation To Broader Scientific Literature: The method introduced enables variational autoencoders to be used effectively as sparse autoencoders. This naturally builds upon both sparse autoencoders and variational autoencoders, the former now experiencing increased relevance due to their use in understanding the "inner" representations used by large language models. Essential References Not Discussed: Not applicable. Other Strengths And Weaknesses: **Strengths** The paper is very well written and organised, which made it very easy to read. The idea pursued is simple, well-motivated, and with a straightforward implementation, leading to a high potential of being used by those interested in sparse autoencoders. **Weaknesses** A worthy mention is perhaps the applicability of sparse autoencoders in general, coupled with the size of the change proposed to VAEs. While properly mathematically substantiated and experimentally verified, the main contribution is still arguably quite straightforward and that could be viewed as its weakness. Other Comments Or Suggestions: Some typos/mistakes: - l. 051: hyperparaemters -> hyperparameters - l. 196: DGMs? Perhaps DMs is meant here? - l. 256: $\mapsto$ should be $\rightarrow$ (`\rightarrow`) - l. 235-236: citation uses parentheses - l. 318: "large" does not seem to belong in the sentence - l. 391: "projection" is a somewhat overloaded term, but has a precise meaning as a linear function. Perhaps use "transformation" or "mapping"? - l.
847: "encoders" is mentioned twice - l. 854: wieghts -> weights - l. 980: isn't $\mathcal{X}'$ meant in the integral? Minor comments: - The title of section C.1 struck me as quite colloquial - F.1: $\mathbf{e}$ is often used as the canonical orthonormal basis vectors, perhaps a better name would be $\mathbf{1}$? - l. 998: Could you indent the expression better to be aligned inside the integral? Questions For Authors: None at this point. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are appreciative of the detailed, constructive comments, and for pointing out the high potential of our work being used by those interested in sparse autoencoders (which is our intended audience). We address main reviewer points in turn below. **Comment:** *As far as I understand, the masking of active dimensions in the experiment illustrated in Fig. 3 follows the partitioning criteria described in lines 367--384 ("throughout this paper" is stated therein). Would it not be fairer, when viewing this against reconstruction error, to mask dimensions in order of least to most disturbing w.r.t. the reconstruction error? In other words, sequentially remove the next active dimension that would perturb the average reconstruction loss the least (maybe different dimensions are picked per model).* **Response:** Actually, the shape of Figure 3 does not appreciably change when enacting the reviewer's suggestion. This is because the magnitude of $\mu_z$ values for SAE models, or $\sigma_z$ values for VAE models, already closely reflect their importance w.r.t. reconstruction error by design. Moreover, using these magnitudes better reflects practical use cases, whereby we would ideally like to assess active dimensions without having to compute $O(\kappa^2)$ separate reconstruction errors. **Comment:** *In lines 420--423: This sentence in the analysis is faulty in my view, since their own method estimates $r \approx 15$ on Pseudo-MNIST. Since the setup upper bounds the ground-truth $r$, a simple (but very laborious) test could follow this setup from 16 dimension down to one dimension, stopping whenever the whole setup degenerates.* **Response:** Our intended point on Lines 420-423 and the surrounding text is the following: By design of the experiment, we know that 16 is a strict upper bound on the ground-truth manifold dimension. Hence any method that produces an estimate above 16 must be incorrect. 
In particular, both diffusion approaches in Table 4 produce estimates vastly exceeding 16. Meanwhile our VAEase approach produces an estimate below 16 despite no knowledge of the generative process. Hence the VAEase estimate is at least plausible, even while conceding that the true manifold dimension might still be a bit smaller (which would also be even further from the diffusion estimates). Of course intuition suggests it can't be dramatically smaller, as visual inspection of human-written MNIST digits indicates multiple non-trivial degrees of freedom. Overall though, our main objective here is to show the superiority of VAEase relative to diffusion, not exactly pinpoint any exact manifold dimensions associated with MNIST. The reviewer's suggestion would indeed help with the latter, but this is computationally intensive and involves subjectively deciding when GAN samples deviate too far from MNIST. Still, the reviewer's comment is well-noted, and we will adjust the writing to ensure our desired points are more clearly conveyed. **Comment:** *I reviewed most of the supplementary material. It was surprising to me how the quality of the writing is considerably worse in the appendix sections compared to the main paper.* **Response:** Admittedly in the rush to submit, more attention was devoted to polishing the main text, which the reviewer agrees is well written and organized. For the revision we can definitely remedy this imbalance. **Comment:** *A worthy mention is perhaps the applicability of sparse autoencoders in general, coupled with the size of the change proposed to VAEs. While properly mathematically substantiated and experimentally verified, the main contribution is still arguably quite straightforward and that could be viewed as its weakness.* **Response:** With regard to SAEs, we believe their relevance is increasing along multiple fronts. For example, as we discuss in Section 5 and elsewhere, there is rapidly growing usage w.r.t. 
learning interpretable representations of LLM activation layers. Moreover, other representative SAE examples spanning vision and language are frequently appearing in the literature, e.g., as in https://arxiv.org/pdf/2502.04382, https://arxiv.org/pdf/2502.06755v1, and https://arxiv.org/pdf/2410.06981. And to better reflect this trend, we have added additional related experiments; please see our rebuttal response to Reviewer WZdZ for details. Secondly, although our VAEase proposal is admittedly simple, we view this as a fundamental strength as it is easy for anyone to apply, portending larger downstream impact. Moreover, the underlying novelty is evidenced by 10 years of extensive VAE usage without prior work ever uncovering, let alone rigorously analyzing, the highly-effective modification we have proposed. **Comment:** *Some typos/mistakes ... Minor comments* **Response:** Thanks for pointing these out; it is extremely helpful for improving the revision. We will definitely correct each of them. --- Rebuttal Comment 1.1: Comment: **RE: Removal of active dimensions based on reconstruction error** I see. This is overall great, but I have one more comment: > Moreover, using these magnitudes better reflects practical use cases, whereby we would ideally like to assess active dimensions without having to compute separate $O(\kappa^2)$ reconstruction errors. I disagree that this is an argument in favour of the evaluation scheme (or against, for that matter). I believe the reasoning should be to effectively evaluate your method. Whether this is useful for practice later is a separate (but also important) concern. **RE: Lines 420--423** Thanks for the clarification! **RE: Weakness** I agree and appreciate the additional experiment. I am increasing my score after a second look into the authors' rebuttal and additional experiments. --- Reply to Comment 1.1.1: Comment: Thanks for closely considering our rebuttal points and providing further constructive feedback.
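To make the masking-order discussion from this thread concrete, here is a toy numpy sketch (the linear "decoder" and all values are hypothetical, not the paper's code) contrasting magnitude-based removal of latent dimensions with the reviewer's greedy least-error removal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "decoder" W and one sample's latent mean mu (hypothetical values).
W = rng.normal(size=(20, 8))
mu = np.array([2.0, 0.01, -1.5, 0.0, 0.8, 0.0, -0.02, 3.0])
x = W @ mu  # treat the full reconstruction as the target

def recon_error(mask):
    """Error when only the masked-in latent dimensions are kept."""
    return float(np.sum((x - W @ (mu * mask)) ** 2))

# Order 1: remove dimensions from smallest to largest |mu| (magnitude-based).
order_mag = list(np.argsort(np.abs(mu)))

# Order 2: greedily remove whichever remaining dimension perturbs the
# reconstruction error least; this needs O(kappa^2) error evaluations.
mask, order_greedy = np.ones(8), []
for _ in range(8):
    active = [j for j in range(8) if mask[j]]

    def err_without(j):
        m = mask.copy()
        m[j] = 0.0
        return recon_error(m)

    j_best = min(active, key=err_without)
    order_greedy.append(j_best)
    mask[j_best] = 0.0

# Near-zero-|mu| dimensions are removed first under both orderings.
print(order_mag[:3], order_greedy[:3])
```

In this toy linear case the two orderings largely agree, which mirrors the response's point that coefficient magnitudes already track reconstruction impact, while the greedy scheme is quadratically more expensive.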
Summary: This paper addresses the limitations of traditional Sparse Autoencoders (SAEs) and Variational Autoencoders (VAEs) in sparse representation learning, particularly their inability to adaptively adjust sparsity patterns and sensitivity to hyperparameters. The authors propose a novel model called VAEase, which combines the strengths of both SAEs and VAEs by introducing a new mechanism that dynamically adjusts sparsity based on input samples without requiring hyperparameter tuning. VAEase achieves this by modifying the VAE framework to use encoder variance as a gating mechanism, allowing it to selectively activate or deactivate latent dimensions for each input. Empirical evaluations on synthetic and real-world datasets demonstrate that VAEase outperforms existing SAE and VAE models in accurately estimating underlying manifold dimensions and producing sparser representations while maintaining low reconstruction error. Overall, VAEase provides a more flexible and efficient approach to sparse autoencoding tasks. Claims And Evidence: There is clear and convincing evidence of the claims made in the submission. Methods And Evaluation Criteria: The proposed method makes sense for the problem, but I do not see clear advantages compared with a traditional VAE. The benchmark datasets and evaluation metrics in this paper are not popular in the field of autoencoder research. Theoretical Claims: The proofs in the article are very detailed. I didn't fully understand them, but I think they are valid and correct. Experimental Designs Or Analyses: The benchmark datasets and evaluation metrics in this paper are not popular in the field of autoencoder research. Supplementary Material: Yes, all the parts. Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. 
The proposed VAEase model introduces a novel yet straightforward modification to the traditional VAE framework by leveraging encoder variance as a gating mechanism. This simple change effectively enables adaptive sparsity without complicating the overall model architecture. 2. The paper provides comprehensive theoretical proofs that support the effectiveness of VAEase. The authors rigorously demonstrate that VAEase can recover underlying manifold structures and prove that it has fewer local minima compared to traditional SAEs, thereby solidifying the theoretical foundation of the proposed method. 3. The experiments are well-designed and demonstrate clear improvements of VAEase over existing methods. The model consistently outperforms SAEs, VAEs, and diffusion models across various synthetic and real-world datasets, validating its ability to achieve sparser representations with lower reconstruction errors. 4. The authors test VAEase on diverse datasets, including linear and nonlinear synthetic data, image datasets (MNIST and FashionMNIST), and high-dimensional language model activations. The consistent performance across these datasets highlights the robustness and generalizability of the proposed model. 5. The paper is well-written, with clear explanations of the model design, theoretical analyses, and experimental results. The authors effectively communicate complex ideas in an accessible manner, making it easy for readers to understand the innovations and contributions of their work. Weaknesses: 1. The experiments primarily focus on synthetic datasets, MNIST, FashionMNIST, and intermediate activations of language models. While these datasets are useful for initial validation, they lack the complexity and diversity of real-world image datasets like ImageNet or COCO. Testing on such mainstream datasets would provide stronger evidence of the model's practical applicability and robustness. 2. 
Although the paper claims improvements over traditional VAEs, the reported gains might not be substantial enough to justify the additional complexity introduced by the VAEase model. In some cases, the differences in performance metrics (e.g., reconstruction error, sparsity) may appear marginal, raising questions about the model's overall advantage. 3. The paper emphasizes the theoretical and experimental aspects of VAEase but lacks a clear discussion of its practical applications. The benefits of adaptive sparsity and reduced reconstruction error are not directly linked to specific use cases, making it difficult to assess the model's real-world impact and potential adoption in industry or other research areas. 4. While the paper provides extensive theoretical proofs, the innovation in the model design might not be immediately apparent to readers. The authors could benefit from a more concise summary of the key innovations and their implications, rather than focusing solely on detailed proofs. This would help highlight the model's novelty and contributions more clearly. Other Comments Or Suggestions: None Questions For Authors: Please refer to the above weaknesses. ## update after rebuttal Based on the authors' responses and the comments from other reviewers, I still maintain my score of weak accept and lean towards accepting the paper. Although I am not an expert on sparse autoencoders, I still believe this work is very meaningful. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for acknowledging the many positive aspects of our work, including the novel design, comprehensive proofs, solid theoretical foundation, well-designed experiments, and the robustness and generalizability of our proposed model. We also appreciate the reviewer's statement that there is clear and convincing evidence of the claims made in our submission. **Comment:** *The proposed method make sense for the problem, but I do not find the advantages compared with traditional VAE.* **Response:** A traditional VAE model cannot readily learn adaptive sparse solutions (meaning solutions whereby the locations of informative latent dimensions vary from sample to sample). In fact, we rigorously prove this VAE limitation via Corollary 4.6, and verify it empirically; see for example Table 2, where the VAE is incapable of learning the correct manifold structure *which requires adaptive sparsity to do so*. **Comment:** *The benchmark dataset and evaluation metrics in this paper are not popular in the field of autoencoder research.* **Response:** SAE models are commonly applied to a wide variety of different tasks, such that we are not aware that there is any particular most popular benchmark. That being said, arguably the most high-profile recent application is to the analysis of LLM activation layers, and we have included such experimentation in our paper. We have also added another new application example to the rebuttal; please see our response to Reviewer WZdZ. Moreover, the metrics we adopt closely mirror prior work as well. **Comment:** *1. ... Testing on more diverse image datasets like ImageNet or COCO.* **Response:** While we agree further testing can always be valuable, our focus is not specifically on computer vision tasks, which is why our submission also includes multiple non-image examples. Moreover, we do not commonly see SAE models applied to ImageNet or COCO. 
Still, application to broader image classes is a useful suggestion for future work. **Comment:** *2. Although the paper claims improvements over traditional VAEs, reported gains might not be substantial enough to justify the additional complexity introduced by the VAEase model ...* **Response:** We respectfully disagree with this point for multiple key reasons. First and most importantly, *our proposed VAEase approach does not introduce additional complexity over a regular VAE*. Instead, it merely reuses the exact same encoder variances as a novel gating mechanism; hence no further justifications are needed with regard to complexity. And secondly, for a fixed reconstruction error, VAEase does in fact produce much greater sparsity across a wide range of tasks as desired. Of course the reported reconstruction errors are often similar, but this is specifically part of the experimental design to facilitate meaningful sparsity comparisons. Along these lines, we also note that under listed strengths, the reviewer specifically mentioned that our experiments are "well-designed and demonstrate clear improvements of VAEase over existing methods." **Comment:** *3. The paper emphasizes the theoretical and experimental aspects of VAEase but lacks a clear discussion on its practical applications ... benefits of adaptive sparsity and reduced reconstruction error are not directly linked to specific use cases.* **Response:** Given the vast literature involving SAE models, we largely defer treatment of particular practical applications to prior work. That being said, we do link the benefits of adaptive sparsity and reduced reconstruction errors to the specific use case of LLM activation interpretability. 
In particular, to complement more subjective/qualitative analysis of SAE-learned representations, we follow common practice in the LLM literature that advocates specifically for comparing sparsity levels at a shared reconstruction error, with greater sparsity (meaning adaptive sparsity from sample to sample as our VAEase achieves) being associated with better interpretability. Even so, we are happy to include further references to this effect (although space in our submission is tight, so there is quite limited room for much additional discussion). **Comment:** *4. ...The authors could benefit from a more concise summary of the key innovations and their implications, rather than focusing solely on detailed proofs.* **Response:** As an important clarification, our work does not focus solely on detailed proofs. In fact, all proofs are exclusively deferred to the very end of our paper deep in the appendices, and certainly not treated as a central focus. And for reference, we concisely list our central innovations and their consequences in the bulleted paragraphs beginning on Line 60, along with pointers to other sections therein. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal that addressed most of my concerns. I will keep my score and tend to accept this paper. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's acknowledgement that our rebuttal addressed most concerns. And if any lingering issue remains, we are happy to address it before the discussion period ends.
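As an aside for readers unfamiliar with the variance-gating idea discussed throughout this thread, the following minimal sketch illustrates how encoder variances could flag per-sample active dimensions; the threshold, the gate form, and the example values are our own illustrative assumptions, not necessarily the paper's exact mechanism:

```python
import numpy as np

# Hypothetical encoder variances for 3 samples and 5 latent dimensions.
# In a VAE, uninformative dimensions tend toward the prior (sigma near 1),
# while informative ones have small posterior variance (sigma << 1).
sigma = np.array([
    [0.05, 0.98, 0.10, 1.01, 0.97],
    [0.99, 0.07, 0.95, 0.04, 1.02],
    [0.06, 0.08, 1.00, 0.99, 0.96],
])

threshold = 0.5  # assumed cutoff separating active from inactive dimensions
active = sigma < threshold

# Adaptive sparsity: the *locations* of active dimensions vary per sample,
# even though each sample here uses the same number of dimensions.
print(active.sum(axis=1))                      # active dims per sample
print([set(np.flatnonzero(a)) for a in active])  # which dims are active
```

The point of the toy is the varying support sets per row: this is the sample-dependent sparsity pattern that, per Corollary 4.6 as cited above, a standard VAE cannot readily learn.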
Relational Conformal Prediction for Correlated Time Series
Accept (poster)
Summary: This paper introduces a method for improving uncertainty quantification in time series forecasting by leveraging correlations between sequences. The authors propose Conformal Relational Prediction (COREL), which integrates conformal prediction and quantile regression with graph-based deep learning. The method captures relationships between time series without requiring prior knowledge of the graph structure. It also adapts to non-exchangeable data and time series variations. Claims And Evidence: I appreciate the authors' effort in comparing their method against multiple baseline approaches across different datasets. However, the results in Table 1 are challenging to interpret. Notably, CoREL consistently fails to achieve the targeted marginal coverage across datasets. As a result, it is unclear whether the smaller prediction interval widths (or lower Winkler scores) reflect the superiority of the proposed method or simply stem from its anti-conservativeness. Methods And Evaluation Criteria: The metrics and datasets considered to evaluate the method make sense for the application at hand. Nevertheless, the paper focuses on the prediction interval widths, which is inherently due to the fact that the method gives confidence regions (in $\mathbb{R}^N$, where N is the number of time series) which are cubes. In my opinion, this is one limitation of the method. Please refer to the section "Questions for authors" for more details. Theoretical Claims: There is no contribution on the theoretical side in the paper. Proposition 3.1 follows from the definition of the total variation distance. Experimental Designs Or Analyses: The method proposed by the authors learns the graph topology used in the GNN to estimate quantiles from the residuals on the calibration set. Nevertheless, the authors do not discuss the learned graph topology (e.g. from an interpretability viewpoint). 
It would also have been interesting to conduct an ablation study to see whether learning the graph really helps compared to considering a predefined graph. Supplementary Material: I read the whole supplementary material, which provides additional information and details on the datasets and software used, and on the experiments performed (such as hyperparameter selection). Relation To Broader Scientific Literature: The paper is at the crossroads of several important topics in the field of uncertainty quantification. The proposed method aims to quantify uncertainty when dealing with spatio-temporal data in multivariate settings. Most existing methods in the literature on conformal prediction typically focus on one of these challenges (i.e. methods designed to deal with multivariate output data, spatial data, or non-exchangeable data). Nevertheless, the way the paper is written gives the impression that the introduced method is a conformal prediction approach. However, the method lacks a theoretical guarantee of non-asymptotic valid coverage. Therefore, claiming that it falls within the conformal framework is not justified, in my opinion. Essential References Not Discussed: Relevant related works are cited as far as I am aware. Other Strengths And Weaknesses: - Other Comments Or Suggestions: Here are some typos: - Eq (9): h_t^L should be H_t^L. - In Eq.(9), should it be $\hat y^{i}_t$ and $h^{i,L}_t$? Questions For Authors: I would be grateful if the authors could clarify the following aspects: - Why are only the residuals provided as input to the RelQPs, and not also the history of the time series themselves? - Could the authors comment on the motivation to feed the time series to the decoder independently, rather than having a single decoder that would take as input the matrix Z_t to predict quantiles in Eq.(12)? - The final conformal sets are cubes (cf. Eq. (14)). 
This form of conformal set appears suboptimal, as it does not leverage the correlation structure learned by the RelQPs between the time series to produce a confidence region that adapts to the dependence structure of the data rather than remaining cubic. - In the paper, the authors explain that "the relational structure is sparse" (between the N time series). The way this sparsity prior is used as an inductive bias in the model architecture and/or the optimization scheme is not very clear to me. It seems that this question is touched on in Appendix D, but there were not enough details for me to understand completely. Could the authors provide more information on this sparsity aspect? - The way the adaptive version of the method works is not clear to me. In Section 5.3, it is stated that "we split the test set in K = 6 folds and then iteratively evaluated and fine-tuned the model on each fold." How would the method be used in practice (since in real applications, the test set is in the future and we don't have measurements)? In my understanding, people would need to split the calibration set into two parts: one used to train the RelQPs and the other used to fine-tune the learnable node embeddings V. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 3
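The sparsity prior asked about above (the rebuttal explains it as learning at most K edges per node) can be illustrated with a short top-K selection sketch; the random score matrix here is purely hypothetical, standing in for whatever edge scores a graph learning module would produce:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 2  # N time series, at most K incoming edges per node (K << N)

scores = rng.normal(size=(N, N))   # hypothetical learned edge scores
np.fill_diagonal(scores, -np.inf)  # no self-loops in this toy

# Keep only the K highest-scoring neighbours per row.
topk = np.argpartition(scores, -K, axis=1)[:, -K:]
adj = np.zeros((N, N))
rows = np.repeat(np.arange(N), K)
adj[rows, topk.ravel()] = 1.0

assert (adj.sum(axis=1) == K).all()  # each node attends to exactly K neighbours
```

Hard top-K selection like this is non-differentiable; in practice graph learning modules typically rely on relaxations or score-based sampling to train end-to-end, but the resulting sparsity pattern is the same kind of K-per-node structure.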
Rebuttal 1: Rebuttal: Thanks for the review, please find point-by-point answers below. >Unclear if smaller PIs (or lower Winkler) simply stem from anti-conservativeness. There might be a misunderstanding. The Winkler score (see Eq. 26 in the Appendix) encompasses both coverage and efficiency, i.e., a smaller PI width does not imply a lower Winkler score. Furthermore, CP methods cannot guarantee exact coverage in these scenarios due to non-exchangeability (see comment on guarantees below). > Discussion of the learned graph topology. Experiments on GPVAR and the inclusion of the CoRNN datasets go exactly in this direction. CoRel consistently outperforms the CoRNN baseline in Tab. 1, showing the effectiveness of the proposed designs. Results on GPVAR show that CoREL matches the performance achievable by accessing the ground truth graph and achieves UQ performance comparable to the theoretical optimum. We will also include visualizations of the learned graph (e.g., https://imgur.com/a/uJaYNnX) - see our answer to Rev oUbs for more discussion on these results on the GPVAR dataset. > No theoretical guarantees for non-asymptotic coverage: claiming that it falls within CP is not justified. Without any strong assumption, CP methods for TS forecasting cannot offer non-asymptotic coverage guarantees as TS data are, in general, not exchangeable (see [1]). As discussed in our paper, most CP methods for TS rely on learning either a model [2,4] or a reweighting scheme [1, 3]. The same is true for adaptive methods such as [5]. CoRel belongs to the family of methods that rely on a model trained on non-conformity scores; it is distribution-free and can be applied on top of any base point predictor. > Typo eq. 9 Thanks, the correct version is: $$\hat y^i_{t} = Readout \left(h_t^{i,L}\right)$$ > Q1 Why only residuals as input and not also the history of the TS? There might be a misunderstanding: as stated in lines 195–198 (right), we also use the actual time series as part of the input. 
> Q2 Motivation to feed to the decoder TS independently rather than matrix Z_t in Eq.(12). Because: (1) it would result in a fully connected layer with a large number of learnable parameters likely to incur overfitting; (2) it would constrain the model to operate on the full set of TS at once while the proposed architecture can operate on any subset separately (more scalable); (3) it would defeat the purpose of using a message-passing architecture rather than a simple fully connected MLP; (4) while the UQ procedure accounts for past observations at correlated TS, prediction intervals for each TS are independent of the others (see below). Note that analogous parameter-sharing architectures, with message passing and sequence modeling operators, are at the core of graph-based processing for TS [6]. > Q3 Conformal sets are cubes (cf. Eq. (14)). This appears suboptimal, as it does not leverage the learned structure. There might be a misunderstanding, which might be due to the admittedly dense notation (we will make sure to improve it). Our framework leverages the learned correlation structure to account for dependencies w.r.t. **past** observations at correlated TS rather than to model the joint distribution of future observations. In particular, we model conditional probabilities as in Eq. 1 (i.e., we model $p(x_t^i|X_{<t}, U_{<t})\ \forall i$ rather than $p(X_t|X_{<t}, U_{<t})$). While modeling joint probabilities within our framework can be interesting, it is a different problem out of scope here. We will clarify this aspect and mention it as an important direction. Thanks for raising this point. > Q4 The way this sparsity prior is used is not clear to me. As explained in Sec. 3.2 we parametrize the graph learning module to learn at most K<<N edges for each node. This improves scalability and can also improve sample efficiency [7]. > Q5 The way the adaptive version of the method works is not clear to me. 
We train the entire model on the full calibration set and use adaptation at test time. Basically, at test time, we simulate a real-world scenario where new data become available over time and are used for fine-tuning. Instead of retraining the entire model (as in [2]), we just fine-tune the node embeddings. Instead of fine-tuning at each time step (high variance and high computational cost), we fine-tune the model every M time steps. [1] Barber et al. “Conformal prediction beyond exchangeability” Ann. Statist. 2023\ [2] Xu et al. “Sequential Predictive Conformal Inference for Time Series” ICML 2023\ [3] Auer et al. “Conformal Prediction for Time Series with Modern Hopfield Networks” NeurIPS 2023\ [4] Xu et al. ”Conformal prediction interval for dynamic time-series” ICML 2021\ [5] Gibbs et al. “Adaptive conformal inference under distribution shift” NeurIPS 2021\ [6] Cini et al. “Graph deep learning for time series forecasting” arXiv 2023\ [7] Cini et al. “Sparse graph learning from spatiotemporal time series” JMLR 2023
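As a toy illustration of the periodic fine-tuning idea described above (not the paper's actual training code), one can track a target quantile of the residuals online by updating only a per-series scalar offset, a stand-in for the node embedding, with a pinball-loss subgradient step every M steps:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, lr, M = 0.9, 0.05, 10  # target quantile level, step size, update period

# Non-conformity scores (residuals) that drift upward mid-stream.
residuals = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])

q = float(np.quantile(residuals[:100], tau))  # "calibration" estimate
buffer = []
for t, r in enumerate(residuals[100:]):
    buffer.append(r)
    if (t + 1) % M == 0:  # fine-tune only every M steps on recent data
        for y in buffer:
            q += lr * (tau - float(y <= q))  # pinball-loss subgradient step
        buffer.clear()

# q has drifted upward to track the shifted 0.9-quantile (roughly 2 + 1.28).
print(round(q, 2))
```

The batched update every M steps, rather than every step, mirrors the variance/cost trade-off mentioned in the rebuttal; the subgradient step converges to the tau-quantile because upward and downward corrections balance exactly when a fraction tau of residuals falls below q.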
Summary: The work introduces Conformal Relational Prediction (CoREL), a novel distribution-free uncertainty quantification method for time series forecasting that leverages graph deep learning. CoREL integrates spatiotemporal graph neural networks with conformal prediction (CP) to capture relational structures among correlated time series and estimate prediction intervals more effectively. ## update after rebuttal Thank you to the authors for the rebuttal. Most of my concerns have been adequately addressed, and I feel it is a good combination of graph representations with time series, so I have decided to raise my score. Claims And Evidence: The work provides strong empirical support for several key claims. CoREL's ability to capture relational structure for uncertainty quantification is demonstrated through both real-world datasets and a controlled synthetic experiment, confirming that the learned relationships enhance prediction intervals. Additionally, CoREL's adaptive learning mechanism is tested on non-stationary data, showing that it can adjust dynamically to evolving time series patterns. However, while the paper claims that CoREL scales well and is computationally efficient, the method requires graph inference at every update step, which may introduce non-trivial overhead. Comparisons against standard CP methods in terms of runtime are needed to fully validate efficiency claims. Methods And Evaluation Criteria: The approach of leveraging GNNs to capture relational structures in time series makes sense overall. The evaluation criteria in the experiments, such as the coverage rate and prediction interval width, align with conformal prediction and prior works. Theoretical Claims: The theoretical guarantee for Proposition 3.1 is correct. Experimental Designs Or Analyses: The experimental design of CoREL demonstrates several strengths, particularly in its comprehensive evaluation across multiple real-world time series datasets. 
The inclusion of diverse baselines ensures that CoREL's performance is rigorously benchmarked against existing conformal prediction methods. The study also examines the adaptability of CoREL in non-stationary settings by performing cross-validation and fine-tuning, demonstrating its ability to refine uncertainty estimates dynamically. Supplementary Material: This paper does not include supplementary material. However, the Appendix provides anonymous code, which is reasonable. Relation To Broader Scientific Literature: CoREL contributes to the broader scientific landscape by bridging conformal prediction, graph-based deep learning, and uncertainty quantification in structured data. Traditional conformal methods assume independence, limiting their effectiveness in relational settings. By dynamically learning inter-series dependencies, CoREL aligns with advances in spatiotemporal modeling, probabilistic inference, and adaptive learning in graph-based frameworks. Its approach connects with broader efforts in representation learning, self-supervised graph modeling, and uncertainty-aware neural architectures, highlighting the growing need for flexible, data-driven methods that generalize across domains. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. CoREL effectively integrates conformal prediction with graph learning, capturing dependencies in correlated time series. It avoids strong distributional assumptions and dynamically adapts to non-stationary data. 2. The model infers inter-series relationships, eliminating the need for predefined graphs. 3. Comprehensive Evaluation – Tested across diverse datasets with ablation studies, reinforcing its empirical validity. Weaknesses: 1. Graph inference and message passing may introduce scalability issues. 2. 
While the model dynamically learns dependencies, there is no formal guarantee that the learned graph structure is optimal for uncertainty quantification, though it proves effective in experiments. Visualizations or comparisons could strengthen this aspect. 3. While CoREL is compared to standard CP methods, it does not benchmark against other graph-based uncertainty quantification approaches, such as Bayesian graph neural networks or CP for GNNs [1]. It would be interesting to compare and discuss the results here. [1] Zhao, T., Kang, J., & Cheng, L. (2024, August). Conformalized link prediction on graph neural networks. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 4490-4499). Other Comments Or Suggestions: The paper is well-structured, clear, and easy to follow. While there are some weaknesses, they do not significantly impact the overall quality. Questions For Authors: 1. Please refer to weaknesses. 2. I am not deeply familiar with the specifics of the datasets, but I wonder if they exhibit strong correlations. Are there any limitations to CoREL? Under what conditions might its performance degrade? A discussion on the method’s limitations would be valuable. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the review. Please find our comments below. > Comparisons against standard CP methods in terms of runtime are needed to fully validate efficiency claims. The two main baselines to compare to here would be SCPI and HopCPT. Scalability issues for SCPI come from training a different model for each time series, which results in high computational costs when dealing with multiple TS. Regarding HopCPT, the main computational bottleneck is due to attention scores being computed w.r.t. the considered sequence at each time step, which results in a quadratic complexity w.r.t. the sequence length. Conversely, CoRel shares most of the parameters among the time series collection and its training can be efficiently parallelized on a GPU. Furthermore, it relies only on a short window of the most recent observations + node embeddings at each time step. If we consider LA (which is the smallest dataset in terms of number of time series) training and testing require **~3 days for SCPI, ~11 hours for HopCPT and ~5 minutes for CoRel**. We will put more emphasis on this aspect in the paper. > W1 Graph inference and message passing may introduce scalability issues. Scalability is inherently an issue when dealing with multiple TS simultaneously, so it can be a challenge, but not a limitation of our method. Additionally, graph inference and message-passing layers can have different computational complexity, depending on how the different components are implemented. Research on scalable graph structure learning methods (e.g., [1]) and STGNN architectures is very active (e.g., see [2,3] for discussion on existing architectures); these approaches could be integrated in the framework. Furthermore, one can rely on graph subsampling techniques (e.g., [4]) to enable scalability. We will mention these points in the paper, thank you. > W2 No formal guarantee that the learned graph structure is optimal[...] Visualizations or comparisons could help. 
First of all, there is not a clear optimality criterion here as the graph is directly learned end-to-end with the quantile regressor and as such it will strongly depend on the actual implementation of the quantile network (e.g., no. of layers, no. of message-passing steps, and so on). Learning a graph here serves the purpose of obtaining good performance on the downstream task. That being said, the experiment on the synthetic GPVAR dataset (where the ground-truth graph is known a priori) shows that CoRel can match the performance of the model that has access to the true graph, and in this scenario the learned graph closely matches the ground truth. A visualization of the learned graph against the ground truth is available here: https://imgur.com/a/uJaYNnX. As shown in the figure, the learned graph includes all the actual edges plus some additional links. As detailed in the paper, in GPVAR, data were created by injecting Gaussian noise into the diffusion process with a std of 0.4, which implies that having direct access to the data-generating process would allow one to obtain (asymptotically) a 90% marginal coverage with a PI width of **1.315**. CoRel achieves essentially perfect coverage with a PI width of **1.329 $\pm$ .002 (close to the theoretical optimum)** while CoRNN requires a PI width of **1.63 $\pm$ 0.01** to obtain similar coverage. We will include these comments and visualizations of the learned adjacency matrix in the paper. Thank you for raising this point. > W3 No benchmark against other graph-based uncertainty quantification approaches such as [1]. It would be interesting to compare and discuss the results here. We discuss several graph-based approaches for uncertainty quantification in the related works, but, as noted in the paper, none of these are post hoc methods and, as such, cannot be directly compared to our approach. Existing CP approaches for graphs are also not applicable to TS processing, e.g., the referenced paper targets link prediction. 
Nonetheless, we agree that the discussion on this aspect can be extended further, and we will include [1] as a relevant reference. > Q2 Not deeply familiar with the specifics of the datasets [...] Are there any limitations to CoRel? Under what conditions might its performance degrade? A discussion on limitations would be valuable. The considered datasets are well-known in correlated TS forecasting and have been extensively used in the related literature. CoRel relies on the assumption of the existence of a Granger causality among TS (as mentioned in the paper), if that is not the case we can expect its performance to be analogous to that of the CoRNN baseline. Indeed, as shown in the experiments, the gap in performance between CoRel and CoRNN is smaller when using an STGNN as a base model, as the STGNN already partially captures relational dependencies. In the updated paper, we will comment on these aspects and provide additional details on future work (see rebuttals to other reviewers).
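For reference, the Winkler (interval) score mentioned earlier in this exchange combines interval width with a coverage penalty, which is why a smaller PI width does not imply a lower score. Here is a minimal sketch following the standard textbook definition (the paper's Eq. 26 may differ, e.g., in normalization):

```python
import numpy as np

def winkler(y, lo, hi, alpha):
    """Winkler/interval score of a central (1 - alpha) PI [lo, hi] at y:
    the width, plus (2/alpha) times the distance by which y misses the PI."""
    width = hi - lo
    below = (2.0 / alpha) * np.maximum(lo - y, 0.0)
    above = (2.0 / alpha) * np.maximum(y - hi, 0.0)
    return width + below + above

alpha = 0.1
covered = winkler(5.0, 4.0, 6.0, alpha)  # y inside the PI: score = width = 2
missed = winkler(7.0, 4.0, 6.0, alpha)   # y misses by 1: width + (2/alpha) * 1
print(round(float(covered), 6), round(float(missed), 6))
```

With alpha = 0.1 the miss penalty is 20 per unit of deviation, so anti-conservative (too-narrow) intervals are heavily punished: a method can only achieve a low Winkler score by being both tight and well-covered.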
Summary: The paper presents Conformal Relational Prediction (COREL), which integrates graph deep learning (GDL) operators into the CP framework, allowing relational structures to improve uncertainty estimation for spatio-temporal time series. The method utilizes an STGNN to provide structural embeddings and applies quantile regression to produce the PIs. The algorithm can be modified to become adaptive to data shifts (by modifying local parameters). Strong experimental results support the authors' claims. Claims And Evidence: Yes, the claims are well-supported. Methods And Evaluation Criteria: Great, no notes. Theoretical Claims: Correct but not the most illuminating. See question section. Experimental Designs Or Analyses: Comprehensive baseline selection, good metrics, nice diverse datasets. Experiment section is well done. Supplementary Material: Yes, I read all. Relation To Broader Scientific Literature: GNN structure-informed conformal quantile regression has not been explored before in the literature. Given the importance of GNNs in modeling spatiotemporal data, this is very useful, as demonstrated by the authors via experiments. Essential References Not Discussed: Should also cite the original conformal quantile regression paper: @article{romano2019conformalized, title={Conformalized quantile regression}, author={Romano, Yaniv and Patterson, Evan and Candes, Emmanuel}, journal={Advances in neural information processing systems}, volume={32}, year={2019} } Other Strengths And Weaknesses: Strong paper for uncertainty quantification. Good method, well-motivated and well-presented. Other Comments Or Suggestions: Typos: - abstract line 38: "archives state-of-the-art uncertainty quantification" → "achieves state-of-the-art uncertainty quantification". - The method by Xu & Xie 2023b is SPCI (sequential predictive conformal inference) instead of SCPI... Questions For Authors: Prop 3.1 is the main theoretical support for the method. 
However, it blankets everything that could undermine the conformal guarantee under the total variation. This is not great because in reality it's hard to estimate or control TV (since it is the sup over the entire space). So there is really no guarantees other than "let's hope our quantile regression works well". The authors wrote, following prop 3.1 > By making assumptions on the expressivity of the quantile regressor in Eq. 13 and on the stationarity process (e.g., by assuming a strongly mixing process), we can expect the total variation between the learned and true distribution to shrink asymptotically as the size of the calibration set increases but provided no proof to support this claim. (I know this is a common argument, but even supporting citations will help, a proof with the specific setup will be better. ) Most previous conformal work that incorporates quantile regression, from CQR to SPCI and [this paper](https://openreview.net/forum?id=oP7arLOWix) (also) by Lee, Xu and Xie, contains an actual validity proof. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the review! Please find our answers to your questions below. > Missing reference. We agree it is a relevant reference. We will include it in the updated version of the paper. Thank you. > Typos. Thank you for spotting those! > Prop 3.1 [...] blankets everything that could undermine the conformal guarantee under the total variation. [...] The authors claim that [we can expect the total variation between the learned and true distribution to shrink asymptotically] but provided no proof to support this claim. [...] this is a common argument, but even supporting citations will help, a proof with the specific setup will be better. Thank you for the comment. As noted, a proof would require assumptions on the data generation process and on the specific implementation of the quantile regressor and message-passing operators. We feel this would not add a lot to the paper since, as you noted, a similar analysis has been done in recent works by Lee, Xu, and Xie. We do, however, agree on the necessity to further discuss this aspect and the related references; we will do this in the updated version. Thanks again for the comment.
Summary: This work is on uncertainty quantification in time series forecasting. The authors proposed a conformal prediction method based on graph neural networks. Their approach is based on quantile regression. A spatiotemporal graph neural network was trained on the residuals of the calibration dataset to predict the quantiles of the error distribution. The predicted residuals are used to learn the quantile function of the error distribution at each time step. The performance metrics used were: the difference between the specified confidence level and the observed coverage on the test set; the width of the PI; and the Winkler score. The proposed approach was tested on three benchmark datasets and the proposed model outperforms competitors in terms of Winkler score for most of the test cases. Its performance is comparable to other models based on the other two metrics. ## update after rebuttal The explanations provided by the authors and the changes they intend to make are satisfactory. However, I would stay with the rating recommended earlier. Claims And Evidence: The results presented show that the proposed model is better than other models based on Winkler score, but other models perform as well or better based on the other two metrics. Hence, one cannot completely claim that the proposed model is superior to previous models. Methods And Evaluation Criteria: The proposed method and the evaluation criteria are appropriate. But, since the authors claim that this work mainly addresses the spatio-temporal relationships in multivariate time series data, it would have been better if they clearly demonstrated the value of this method with detailed explanation and results at least for one of the test cases, showing how taking spatio-temporal relationships into account has provided additional insights into the behavior of the system. 
Theoretical Claims: The authors made a theoretical proposition and also provided a short proof for the same (Section 3.3 and Appendix A). Experimental Designs Or Analyses: I did not see any experimental designs or analyses of the same or results of such experimental designs. The authors have essentially tested their proposed method on three standard benchmarks and presented the results. Supplementary Material: The authors provided additional details of their work in the Appendix. They contain a short proof of a proposition they made, and details of hardware and software used, datasets, implementation and performance metrics. Relation To Broader Scientific Literature: This work could be considered an incremental contribution to uncertainty estimation in multivariate time series forecasting. It is certainly of value but needs further testing and evaluation in detail for specific use cases to see if it has real practical utility. Essential References Not Discussed: The main contribution of this work is the development of an STGNN for conformal prediction. The authors cited relevant literature and provided appropriate references. A fairly decent review of the previous work was provided in Section 4. The authors identified the shortcomings of the previous works in terms of STGNNs and CP and cited relevant literature. Other Strengths And Weaknesses: Both STGNNs and conformal prediction methods exist. But where the current work stands out is in introducing quantile regression for predicting the quantiles of the error distribution of an existing model, rather than forecasting the target variable. The proposed approach appears to be more adaptive. The local components of the model are updated over time while retaining the global relationships. However, the authors focused only on univariate analysis. Ideally, one should demonstrate the approach with multivariate time series. 
Secondly, while this approach is different from previous works, it's important to demonstrate how predicting the quantiles of the error distribution is better than forecasting the target variables. The advantage has to be shown quantitatively. The authors demonstrated the approach on three benchmarks and provided the standard test results. One of the main drawbacks of this work, as well as similar such works, is that one does not know the actual utility of this work to an application engineer. How does this work make a difference in real life? Would a one or two percent improvement in Winkler score make a great difference to uncertainty estimation, and can it alter final decision-making? Do the improvements shown in performance metrics, of one or two percent, and in some cases just in the third or fourth decimal place, make any difference to the end user? Other Comments Or Suggestions: It is important to quantitatively demonstrate the value of conformal prediction with quantile regression over forecasting target variables. The comparison has to be on the approach rather than the STGNN model. Secondly, it would be better if the authors took a real-life example and demonstrated the value of the work rather than reporting standard performance metrics showing improvements of a few percentage points, or in the third or fourth decimal place. Such works do not really make any difference to practising engineers. Similarly, since this work is on STGNNs, it's important to demonstrate the true value with a multivariate time series forecasting test case and provide insights on how incorporating dynamic spatio-temporal relationships makes a difference in uncertainty estimation. Questions For Authors: Please see the earlier comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review! Please find our point-by-point answers below. > Results show that the proposed model is better on Winkler score but not on the other two metrics. Hence, cannot claim that the proposed model is superior. Coverage and PI Width on their own do not say much about UQ performance, as one can trivially get high coverage with an arbitrarily large prediction interval or get small PI Width with a narrow, but not valid, one. Doing well on both aspects at the same time is the challenge: the Winkler score (see Appendix) considers coverage and PI width at the same time. We will add this comment to the paper. > It would have been important to show how modeling spatio-temporal relationships has provided additional insights. The inclusion of the CoRNN baseline which does not include any message passing or graph learning component, provides the insights asked by the reviewer by showing the impact of the proposed designs. Furthermore, experiments on the synthetic GPVAR dataset (where spatial dependencies determine the observed dynamics) show that CoRel can match the performance of a model that has access to the ground-truth graph and obtain UQ performance close to the theoretical optimum (see rebuttal to Rev oUbs). Additionally, there is a wide literature on how modeling and learning relational dependencies is beneficial in TS (e.g., see [1,2]). Our framework allows one to benefit from such representations in post hoc UQ. > I did not see any experimental designs or analyses of the same or results of such experimental designs. There might be a misunderstanding. We designed a set of benchmarks for assessing post hoc UQ quantification on correlated TS. The selected datasets have already been used in the context of TS forecasting, but their use in the context of UQ is new. This also required setting up different base models to test UQ methods in different scenarios for each dataset. 
Furthermore, we also included a synthetic dataset to further validate the proposed designs. We believe that experiments do show the effectiveness of CoRel. > The authors focused on univariate analysis. Ideally, one should demonstrate the approach with multivariate time series. Our approach can be extended to collections of multivariate TS by pairing the proposed framework with a quantile regressor able to handle multivariate data at each node, e.g., [3], or by simply using a separate quantile regressor for each output channel. Note, however, that this is orthogonal to the proposed approach. We will include a comment on this in the paper. > While the approach is different from previous works, it's important to quantitatively demonstrate how predicting the quantiles of the error distribution is better than forecasting the target. There might be a misunderstanding. As with any CP method, CoRel quantifies the uncertainty of the predictions of a base pre-trained model. This is a fundamentally different problem from learning a probabilistic predictor directly. It is not possible to set up a direct comparison as the results would heavily depend on the base predictor. This difference is pointed out in Sec. 1 and 4; we will make sure to further emphasize this aspect. > One does not know actual utility of this work to an application engineer. How does this work make a difference in real life? We agree on the importance of engineering applications. The usefulness of more accurate UQ (and CP methods in particular) in real-world applications is widely recognized [4]. However, assessing the practical impact of a particular method in a specific application and downstream task is orthogonal to our contribution, which is on fundamental research in ML and statistics. > Would a 1 or 2 % improvement in Winkler score make a great difference? Any difference for the user? There may be a misunderstanding. 
Differences in Winkler score are quite significant in all the considered scenarios (even more than 10% in some cases) and a 10% difference in performance is highly significant in any engineering application. As discussed in previous answers, coverage and PI width should be considered together. For an example of how CoRel's prediction intervals look compared to the CoRNN baseline in a given scenario (LA dataset with an RNN base model), see https://imgur.com/a/UYIj0nR. Additionally, we believe that the consistent performance improvements and methodological novelty make our paper a relevant contribution on its own, besides specific applications. The analysis of specific applications in science and engineering is out of scope and would require separate studies. [1] Jin et al. “A survey on graph neural networks for time series” TPAMI 2024\ [2] Cini et al. “Graph deep learning for time series forecasting” arXiv 2023\ [3] Feldman et al. “Calibrated Multiple-Output Quantile Regression with Representation Learning” JMLR 2023\ [4] Smith, “Uncertainty Quantification: Theory, Implementation, and Applications” SIAM 2013 --- Rebuttal Comment 1.1: Comment: While I agree with the explanations provided by the authors, the inherent weaknesses of the work still remain, for ex., demonstration with univariate data. The theoretical contribution is only marginal and not giving enough importance to demonstration with real-life examples. Hence, the overall recommendation will remain the same. --- Reply to Comment 1.1.1: Comment: Thank you for the comment. We genuinely respect your opinion, but we think the paper already makes relevant contributions and addresses many aspects, e.g., 1) conformal prediction on collections of correlated time series, 2) graph-based quantile regression from residuals, 3) probabilistic latent graph learning, 4) hybrid global-local time series processing, 5) adaptation and fine-tuning of a global-local UQ model and more. 
Given the necessity of keeping the scope of a conference paper contained, we feel that addressing real-world applications more than what we already do (we use datasets coming from 3 different very practical applications--traffic, air quality, and energy analytics) would dilute the content of the paper too much. Similar comments can be made for including multivariate settings. Thank you again for the valuable feedback and for reviewing the paper!
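For concreteness, the Winkler score discussed throughout this thread combines interval width with a miscoverage penalty. A minimal sketch, assuming the standard interval-score definition (the paper's appendix may differ in normalization); the function name is ours, not the authors':

```python
def winkler_score(y, lower, upper, alpha=0.1):
    """Interval score for a (1 - alpha) prediction interval [lower, upper]:
    the width, plus 2/alpha times the distance by which y falls outside.
    Lower is better; both overly wide and invalid intervals are penalized."""
    score = upper - lower
    if y < lower:
        score += (2.0 / alpha) * (lower - y)
    elif y > upper:
        score += (2.0 / alpha) * (y - upper)
    return score
```

With `alpha = 0.1`, the interval `[0, 1]` scores 1.0 when `y = 0.5` lands inside, but 3.0 when `y = 1.1` overshoots by 0.1; this is why the rebuttal argues that coverage and PI width must be read together rather than in isolation.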
Summary: The paper introduces Conformal Relational Prediction (COREL), a novel approach for uncertainty quantification in correlated time series forecasting using graph deep learning frameworks. COREL overcomes the data exchangeability limitation by employing a spatiotemporal graph neural network (STGNN) to model relationships among multiple time series based on past residuals, enabling the estimation of the quantile function for prediction errors. Unlike existing methods that treat time series independently, COREL captures dependencies among correlated series by conditioning uncertainty estimates on neighboring time series through a learned graph structure. Additionally, it incorporates an adaptive mechanism to handle non-stationary inputs, enhancing prediction intervals (PIs) across varying conditions. Empirical evaluations show that COREL achieves state-of-the-art performance across multiple benchmarks, effectively quantifying uncertainty while preserving high predictive accuracy. Claims And Evidence: The claims in the submission are well-supported by clear and compelling evidence, particularly regarding the performance of the Conformal Relational Prediction (COREL) method. The authors provide empirical results demonstrating that COREL outperforms existing conformal prediction approaches across multiple benchmarks. The paper also details the algorithmic framework of COREL, highlighting its use of a spatiotemporal graph neural network to capture relationships among time series. Additionally, the inclusion of an adaptive mechanism for handling non-stationary inputs is emphasized as a key contribution, with validation through experimental results. However, further detailed comparisons with a larger set of existing methods and additional concrete examples of scenarios where COREL outperforms other methods can bolster the robustness. Methods And Evaluation Criteria: Yes, it generally does. Theoretical Claims: The proof for Prop. 
3.1 looks good without any obvious issues. Experimental Designs Or Analyses: 1. The paper presents sound experimental designs to validate the effectiveness of COREL, notably through comparative experiments against several baseline methods such as SCP, SeqCP, and SCPI across diverse datasets. This comparative approach is valid but could be strengthened by including a wider variety of baseline models and clarifying their selection rationale. 2. The use of the Winkler score and coverage metrics is appropriate. 3. The controlled environment testing using a graph diffusion process adds rigor but assumes the simulation accurately represents real-world scenarios, which may not always hold. The adaptability experiments evaluating COREL's performance in non-stationary settings are crucial, but more detail on the fine-tuning methods used would enhance transparency. Supplementary Material: Yes. All. Relation To Broader Scientific Literature: The key contributions of the paper, particularly the introduction of COREL for uncertainty quantification in correlated time series using graph deep learning, are closely related to the existing literature in several ways. Firstly, the application of GDL to CP for time series: traditional conformal prediction (CP) methods, which often assume exchangeability, have been extended to account for time series data, showing limitations in their application due to non-stationarity in real-world scenarios. COREL builds on these findings by integrating a graph structure learning module to capture spatiotemporal dependencies, which has been noted as beneficial for improving prediction accuracy in various applications. Moreover, COREL's reliance on quantile regression aligns with established statistical frameworks for probabilistic forecasting, while addressing the shortcomings of existing CP techniques that often operate independently on univariate time series. 
By allowing for collective learning across multiple related time series, COREL advances the state of the art in uncertainty quantification methods, demonstrating effectiveness over traditional approaches and laying the groundwork for future explorations in spatiotemporal CP frameworks. Essential References Not Discussed: None that I can think of now. Other Strengths And Weaknesses: Strengths: 1. COREL introduces a new conformal prediction method that effectively utilizes graph deep learning to quantify uncertainty in correlated time series. 2. The model captures relationships among time series through a graph structure, enhancing forecasting accuracy by leveraging spatiotemporal dependencies. 3. Empirical results demonstrate that COREL achieves state-of-the-art performance compared to existing CP approaches across multiple datasets and scenarios. 4. COREL can be applied to residuals from any point forecasting model, even those that do not consider relationships among input time series, allowing for broader applicability. 5. The inclusion of node-level parameters that adapt to changes in target sequences addresses non-stationarities effectively. Weaknesses: 1. The integration of graph deep learning may increase computational complexity and resource requirements, potentially limiting its practical deployment in resource-constrained environments. 2. The paper could provide more clarity on the assumptions made about the structure and nature of the underlying time series data, which may affect the generalizability of the findings. 3. While adaptability is addressed, the actual performance in highly volatile or non-stationary conditions is not thoroughly evaluated and discussed. Other Comments Or Suggestions: No alarming typos. Questions For Authors: Authors can possibly further explore how well their method performs in broader non-stationary conditions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the review! Please find our comments below. > Further comparisons with existing methods and additional concrete examples of scenarios where COREL outperforms other methods can bolster the robustness. [...] This comparative approach is valid but could be strengthened by including a wider variety of baseline models and clarifying their selection rationale. Thank you for the comment. The included baselines are representative of the current state of the art and existing approaches to the problem; SCPI and HopCPT, in particular, are modern and very competitive baselines. The paper includes results from 3 datasets and 3 different base models together with 1 synthetic dataset with 2 base models for a total of 11 different benchmark scenarios. We will also include the compute time of the different baselines (see our rebuttal to Rev oUbs). We believe this is an adequate evaluation setup as it highlights the strengths and weaknesses of the different methods in different settings. Nevertheless, we agree that the motivations for choosing the selected baselines could have been discussed more. We will include these considerations in the paper. > Controlled environment adds rigor but assumes the simulation accurately represents real-world scenarios, which may not always hold. There might be a misunderstanding. The objective of the experiment on the synthetic dataset is not to mimic a real-world scenario but to show that CoRel can learn and exploit latent relationships among time series. Indeed, its performance here matches that achievable by a model with access to the ground truth relational structure, and its UQ performance is close to the theoretical optimum. For further discussion and analysis, see also our comments on the GPVAR experiment in response to Rev oUbs. > The adaptability experiments evaluating COREL's performance in non-stationary settings are crucial, but more detail on the fine-tuning methods used would enhance transparency. 
Details of the fine-tuning procedure are provided in the appendix. In short, embeddings are updated every M time steps by running the training procedure on the latest observations and keeping all the parameters frozen except for the embeddings. Are there specific aspects that you believe would benefit from more discussion? We will move part of the description of the adaptation procedure to the main body of the paper and we are available for further clarifications. > W1 The integration of graph deep learning may increase computational complexity and resource requirements, potentially limiting its practical deployment in resource-constrained environments. The additional computational complexity is inherent in operating on multiple TS simultaneously. Thus, it is more a challenge of the problem settings rather than an issue with our approach. Furthermore, CoRel is much more scalable than the baselines in these settings as discussed in the rebuttal to Reviewer oUbs’ comments. There are also techniques to make the graph-based processing scalable that could be applied here as well. These techniques range from graph sub-sampling (e.g., [1,2]) to scalable architectures (e.g., [3,4]). We will include a discussion of this aspect in the paper. > W2 The paper could provide more clarity on the assumptions made about the structure and nature of the underlying data, which may affect generalizability. We do not rely on strong assumptions about the correlated TS besides the existence of Granger causality among them for the graph-based approach to be beneficial. Stationarity assumptions are also discussed in the paper. The sparsity assumption is more of an inductive bias/regularization for learning a sparse graph. If the sparsity assumption is deemed to be unrealistic for a problem at hand, it would be possible to use operators to learn a dense graph, e.g., by modeling each edge as a Bernoulli RV. We will add a comment on this aspect in the paper. 
> W3/Q1 While adaptability is addressed [...] authors can possibly further explore how well their method performs in broader non-stationary conditions. We provide a simple and effective method to make our approach adaptive and show its effectiveness in 3 different scenarios. However, adaptability is not the central focus of our work, which focuses on CP on related TS. We believe that a dedicated study would be needed to address non-stationary settings in depth, as we discuss in the future work section. We will emphasize the importance of this direction in the updated paper. [1] Hamilton et al. “Inductive representation learning on large graphs” NeurIPS 2017\ [2] Chiang et al. “Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks” KDD 2019\ [3] Frasca et al. “SIGN: Scalable Inception Graph Neural Networks” arXiv 2020\ [4] Cini et al. “Scalable Spatiotemporal Graph Neural Networks” AAAI 2023
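The adaptation scheme described in this rebuttal (periodically re-fitting only the local, per-node parameters on recent observations while keeping shared weights frozen) can be sketched in miniature. Everything below is hypothetical: it collapses each node's embedding to the scalar quantile it induces and fits it by subgradient descent on the pinball loss, so it only illustrates the freeze-all-but-local-parameters idea, not the actual CoRel implementation:

```python
def adapt_local_params(quantiles, recent_windows, tau=0.95, lr=1.0, epochs=1000):
    """Update only the per-node quantile estimates (the 'local' parameters)
    by subgradient descent on the pinball loss; global weights stay frozen."""
    for _ in range(epochs):
        for node, window in recent_windows.items():
            q = quantiles[node]
            # subgradient of the pinball loss at level tau w.r.t. q
            grad = sum((1 - tau) if r < q else -tau for r in window) / len(window)
            quantiles[node] = q - lr * grad
    return quantiles

# Every M steps one would call this on the latest residual window per node.
residual_windows = {"node_a": [float(r) for r in range(100)]}
adapted = adapt_local_params({"node_a": 0.0}, residual_windows)
```

After fitting, the local estimate settles near the empirical 95th percentile of the recent residuals, which is exactly the quantity a per-node correction needs to track under distribution shift.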
Semi-gradient DICE for Offline Constrained Reinforcement Learning
Reject
Summary: This paper investigates the limitations of SemiDICE in offline constrained reinforcement learning, revealing that it outputs policy corrections rather than stationary distribution corrections. This fundamental flaw significantly impairs SemiDICE's effectiveness in off-policy evaluation (OPE). Based on these findings, CORSDICE is proposed to facilitate OPE by computing state stationary distribution corrections. ## update after rebuttal I thank the authors for offering the empirical evidence. I have no further questions and will keep my score. Claims And Evidence: While the majority of the claims presented in this paper are substantiated by clear and compelling evidence, one particular assertion in Section 4 (Page 4) requires further clarification. The authors state that "there often exists a state s such that $d_{\pi^{\star}}(s, a) = 0$ for all a in OptiDICE." However, upon careful examination, I was unable to locate any empirical evidence or theoretical justification supporting this specific claim within the manuscript. Methods And Evaluation Criteria: It makes sense for the problem. Theoretical Claims: I've checked the correctness of the theorem proofs mentioned in the main paper. Experimental Designs Or Analyses: The experimental design demonstrates both comprehensiveness and persuasiveness, reflecting the authors' substantial efforts in empirically validating CORSDICE. The experimental evaluation comprises three comprehensive components. The initial component systematically examines algorithmic characteristics, providing empirical validation for the limitations of existing DICE methods as discussed in Section 4. The subsequent component demonstrates CORSDICE's superior capability in achieving precise OPE. The third component evaluates CORSDICE's performance on benchmarks in offline constrained reinforcement learning. Supplementary Material: The Supplementary Material includes the code for CORSDICE; however, I have not executed it due to time constraints. 
Relation To Broader Scientific Literature: The paper is well-grounded in the broader literature. Essential References Not Discussed: No Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: The question I asked in the claims and Evidence part. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comment and provide empirical evidence in Section 4 to better substantiate our work. ## Q1. Empirical evidence on the claim from Section 4 We demonstrate that OptiDICE can yield a state $s$ where $d_{\pi^*}(s,a)=0\;\forall a$. The figure provided in https://imgur.com/a/7zh2zlg is based on the Four Rooms domain from Figure 1 of OptiDICE [1], using 1000 trajectories collected under a behavior policy with 0.7 optimality. In this setup, red arrows indicate the optimal policy, and the heatmap shows the state visitation frequency under the optimal policy, $\sum_{a}d_{\pi^*}(s,a)$. While all states are visited by the behavior policy, some states have no arrows, indicating they are not visited by the optimal policy ($\sum_{a}d_{\pi^*}(s,a)\approx 0$). Consequently, the optimal policy in those states cannot be recovered by computing $d_{\pi^*}(s,a)/\sum_{a}d_{\pi^*}(s,a)$. We will include this demonstration in the final version. [1] Optidice: Offline policy optimization via stationary distribution correction estimation. ICML, 2021. --- Rebuttal Comment 1.1: Comment: Thank you for offering the empirical evidence. I have no further questions and will keep my score.
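The failure mode demonstrated in this rebuttal, that a policy cannot be recovered via $\pi(a|s) = d_{\pi^*}(s,a)/\sum_a d_{\pi^*}(s,a)$ at states the optimal policy never visits, can be made concrete with a toy example; the states, actions, and masses below are hypothetical:

```python
def recover_policy(d, states, actions):
    """Normalize a stationary distribution d(s, a) into a policy.
    At states with zero total mass the division is undefined, so the
    policy there is simply not recoverable (marked None)."""
    policy = {}
    for s in states:
        mass = sum(d.get((s, a), 0.0) for a in actions)
        if mass == 0.0:
            policy[s] = None  # d_pi*(s, a) = 0 for all a: no policy here
        else:
            policy[s] = {a: d.get((s, a), 0.0) / mass for a in actions}
    return policy

states, actions = ["s0", "s1"], ["left", "right"]
d_opt = {("s0", "right"): 0.7, ("s0", "left"): 0.3}  # s1 is never visited
pi = recover_policy(d_opt, states, actions)
```

This mirrors the Four Rooms figure in the rebuttal: states with no arrows are exactly those where the normalization has a zero denominator.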
Summary: The paper developed a new offline RL algorithm that applies semi-gradient DICE, addressing the challenge of constraint violation when applying semi-gradient DICE in the context of constrained RL. The paper provides theoretical analysis on the characteristics of the correction term (i.e., the ratio of the stationary distribution w.r.t. the learning policy to that w.r.t. the dataset policy) that lead to the violation of the Bellman equation (thus the constraint violation). The paper then proposes a stationary distribution correction idea to address the issue. Experiments on several benchmarks show the advantages of the proposed algorithm CORSDICE in comparison with other offline RL algorithms. Claims And Evidence: The advantages of the proposed algorithm CORSDICE are supported by both theoretical and experimental results. In particular, the paper provides substantial empirical analysis on different aspects (i.e., violations of Bellman and policy correction constraints) and the performance of CORSDICE compared with other baselines. Methods And Evaluation Criteria: The semi-gradient DICE approach is well-suited for offline RL. Benchmarks such as the D4RL datasets or the DSRL datasets are commonly used in the literature. Theoretical Claims: No, I didn't check the correctness of any proofs in the appendix. Experimental Designs Or Analyses: Experimental result analysis in the paper is extensive, examining the performance and characteristics of the proposed algorithm in various settings using both D4RL and DSRL datasets. Supplementary Material: No, I didn't. Relation To Broader Scientific Literature: The paper develops a new offline RL algorithm, utilizing DICE --- a well-known framework used in RL literature. There is a long line of research that applies DICE to different settings, ranging from single-agent to multi-agent, from unconstrained to constrained RL. 
This work extends semi-gradient DICE to the context of constrained RL, offering insights that could be valuable to the RL community. Essential References Not Discussed: I am not aware of missing essential references. Other Strengths And Weaknesses: A key strength of the paper is a thorough theoretical analysis on properties of semi-gradient DICE in constrained RL, including the analysis on the violation of the stationary distribution and the connection with behavior-regularized offline RL. The proposed idea of extracting state stationary distribution to fix the cost violation is novel and well justified. Other Comments Or Suggestions: None. Questions For Authors: I don't have questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer’s acknowledgment of our contribution: extending semi-gradient DICE to constrained offline RL, grounded in theoretical analysis of its optimal solution and stationary distribution correction approach to address the issue of Bellman flow constraint violation.
Summary: This paper proposes a DICE-based algorithm for offline constrained RL. The proposed method can be seen as a SemiDICE version of COptiDICE, with some extra designs. The paper is generally well-written, but I also feel there is some overclaiming of contributions and a lack of adequate acknowledgment of existing works. The final safety control performance is not very impressive compared to some recent SOTA safe offline RL algorithms. Claims And Evidence: - The paper claims to identify the root cause of the limitation when using the SemiDICE framework to perform OPE. The claim is generally supported by the theoretical analysis, although I think the description overstates some points. - The paper claims that it achieves state-of-the-art (SOTA) performance on DSRL benchmarks; however, inspecting Tables 2 and 3, many results for CORSDICE have somewhat high costs, and many are close to the cost threshold 1. This is not a desirable property for a safe policy. Methods And Evaluation Criteria: The method generally makes sense. The evaluation on the DSRL benchmark is also reasonable. Theoretical Claims: I think there is some overclaiming and a lack of acknowledgment of existing works in the paper. - For Proposition 4.1, it is well-known that SemiDICE methods may not satisfy the Bellman flow constraint, since it replaces $(1-\gamma)E_{s_0\sim p_0}[v(s_0)]$ with $E_{(s,a)\sim d_D}[v(s)]$ and uses the parameter $\alpha$, which causes the Bellman flow property to break apart. This is obvious and more or less mentioned in existing works, hence I do not think it is that new. - For the discussion in Section 4 about the connections to behavior-regularized offline RL, this actually has been thoroughly discussed in the ODICE paper [1] by Mao, et al. (2024). Actually, they derive their methods by first noticing the relationship between SemiDICE and behavior-regularized offline RL. This is not adequately acknowledged in the paper.
- The introduction of another function approximator $A(s)$ for bias reduction is identical to the trick used in PORelDICE [2]. Again, this is never mentioned nor discussed in the description of the methodology, even though the authors actually cited the paper in the preliminary section. [1] Revealing the mysteries of distribution correction estimation. ICLR 2024. [2] Relaxed Stationary Distribution Correction Estimation for Improved Offline Policy Optimization. AAAI 2024. Experimental Designs Or Analyses: - The experiment design is generally reasonable; however, I feel the safety control performance of the proposed method is not very impressive. Although the proposed method can control the cost value below the threshold, it often gets high cost values (close to the threshold), especially in Table 2. - Why are many tasks in Table 2 not tested and reported in Table 3? Supplementary Material: I've read the appendix of the paper, but have not run the supplementary code provided along with the paper. Relation To Broader Scientific Literature: Safe offline RL has broad applications in robotics, autonomous driving, and industry control. Essential References Not Discussed: See my comment in the Theoretical Claims section. Other Strengths And Weaknesses: **Strengths:** - The paper is generally well-written and easy to read. The key ideas are clearly conveyed and discussed. **Weaknesses:** - The paper actually borrowed lots of methodological designs and insights from existing papers, but does not adequately acknowledge them. For example, the problem framework is from COptiDICE [1]; the insights and techniques of SemiDICE are from DualRL [2] and ODICE [3]; the insight connecting SemiDICE and behavior-regularized offline RL is from ODICE [3]; and the trick for bias reduction is from PORelDICE [4]. None of these are adequately stated in the paper.
[1] COptiDICE: Offline Constrained Reinforcement Learning via Stationary Distribution Correction Estimation [2] Dual RL: Unification and New Methods for Reinforcement and Imitation Learning. ICLR 2024 [3] Revealing the mysteries of distribution correction estimation. ICLR 2024. [4] Relaxed Stationary Distribution Correction Estimation for Improved Offline Policy Optimization. AAAI 2024. Other Comments Or Suggestions: - From G.3, it seems the paper conducted heavy hyperparameter tuning to get good results. This is not encouraged in offline RL research. Most offline RL methods are designed to solve real-world problems with restricted online system interaction during the training stage, so we do not have much opportunity for exhaustive hyperparameter tuning, and a bad policy can cause severe consequences on real systems. It is desirable to report results based on fixed hyperparameters or only a small set of hyperparameters. Using random search and hyperparameter optimization to get nicer results is a bad practice in offline RL research. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough and constructive comments. We hope we can address your concerns below. ## Q1. Violation of Bellman flow (BF) constraint and the originality of Proposition 4.1 We respectfully disagree with the reviewer’s claim that replacing the term $(1-\gamma)p_0$ with the dataset distribution $d_D(s)$ is a well-known violation of the BF constraint. To clarify, there are two distinct scenarios for this replacement under the full-gradient DICE method: - Direct substitution: While replacing $(1-\gamma)p_0$ with $d_D$ does violate the BF constraint, the resulting $d$ becomes a **scaled (due to removal of the $1-\gamma$ factor)** stationary distribution under a modified MDP with initial state distribution $d_D$ **(due to $p_0\to d_D$)**. The constant scaling can be easily corrected during the policy extraction. - Assumption-based substitution (as used in SemiDICE, appx. B, Eq. (31-32)): Here, we assume $d_D$ satisfies the stationary distribution condition. This method does __not__ violate the Bellman flow constraint. Thus, we argue that the fundamental cause of BF violation is the use of the semi-gradient update—not the replacement of the initial state distribution. To our knowledge, this is the first time this specific cause has been clearly identified. Furthermore, Proposition 4.1 does more than pinpoint the violation's cause. It shows that SemiDICE outputs the policy ratio, which is essential for applying our stationary distribution extraction technique. Without Proposition 4.1, this insight would not be possible. We believe this adds a meaningful contribution to the understanding of semi-gradient DICE methods. Finally, $\alpha$ only affects the conservatism strength of the DICE method—it has no impact on the BF property. ## Q2. Lack of acknowledgement [1, 2] for discovering the connection Although we cited [1, 2] and briefly discussed them in appx. C, we will revise the manuscript to better acknowledge their discovery.
While earlier studies implied the connection through similar loss functions, we make it more explicit by showing that a behavior-regularized MDP with a general f-divergence can be approximated by its corresponding SemiDICE. This insight is central to our work, as it justifies the SemiDICE policy ratio and supports its constrained extension—the main contribution of our paper. ## Q3. Lack of acknowledgement [3] for bias reduction technique While our bias reduction technique was inspired by PORelDICE [3], we approximate a different expectation. PORelDICE estimates $U(s,a)\approx r(s,a) +\gamma\sum_{s'}T(s'|s,a)\nu(s')$, while we approximate $A(s)\approx\sum_a\pi(a|s)(\gamma\sum_{s'}T(s'|s,a)\mu(s')-\mu(s))$. Consequently, PORelDICE focuses on reducing transition bias to improve offline RL, whereas our approach targets bias from both the transition dynamics and the policy to improve OPE. We will acknowledge [3] appropriately and clarify this distinction in the final version. ## Q4. High cost close to limit We respectfully clarify that obtaining cost values near the constraint threshold is not a flaw but a desirable behavior in our average-constrained optimization setting (CMDP, Eq. (7)). Our objective explicitly encourages using as much of the cost budget as needed to improve the return, so long as the expected constraint is satisfied. This is a principled difference from hard or probability-constrained formulations, where any threshold violation is unacceptable. Recent works (e.g. [4], [5]) have studied these stricter formulations; our method is complementary and could be extended to them in the future. We will update the manuscript to make this distinction clearer. ## Q5. Hyperparameter tuning While G.3 lists all potential hyperparameters, we only tuned one—α—for CORSDICE. For D-CORSDICE, we tuned two additional hyperparameters (guidance scale values, the number of inference action values), which were directly adopted from D-DICE [6].
For inference samples, we used a smaller search space than in D-DICE (Table 4 of [6]). The original paper did not specify a search space for guidance scale, so we referred to the values in their official implementation. Importantly, all methods, including baselines, were tuned with the same hyperparameter search budget and procedure to ensure fair comparison. We will clarify these points in the revised manuscript. [1] Dual RL: Unification and New Methods for Reinforcement and Imitation Learning. ICLR 2024 [2] ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update. ICLR 2024 [3] Relaxed stationary distribution correction estimation for improved offline policy optimization. AAAI 2024 [4] Safe Offline Reinforcement Learning with Feasibility-Guided Diffusion Model. ICLR 2024 [5] Quantile Constrained Reinforcement Learning: A Reinforcement Learning Framework Constraining Outage Probability. NeurIPS 2022 [6] Diffusion-DICE: In-Sample Diffusion Guidance for Offline Reinforcement Learning. NeurIPS 2024
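For reference, the Bellman flow constraint discussed in Q1 can be written, in a standard DICE notation that may differ slightly from the paper's, as

```latex
\sum_{a} d(s,a) \;=\; (1-\gamma)\, p_0(s) \;+\; \gamma \sum_{s',a'} T(s \mid s', a')\, d(s', a'), \qquad \forall s .
```

The direct substitution replaces $(1-\gamma)p_0(s)$ on the right-hand side with $d_D(s)$, yielding a scaled stationary distribution of a modified MDP, while the assumption-based substitution instead assumes $d_D$ itself satisfies this equation.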
Scaling Inference-Efficient Language Models
Accept (poster)
Summary: This paper proposes to modify the Chinchilla scaling laws to also include the model aspect ratio (embedding dim / number of layers) in the scaling law. This accounts for the fact that wider and shallower models are faster in inference. Additionally, the paper suggests including latency as a key inference metric in the model selection criteria. Finally, the authors train a wider and shallower 1B parameter model which has a lower latency than comparable models. Claims And Evidence: Claims: > Models of similar training loss exhibit gaps in downstream evaluation. > Also L. 88: “the disparity between model loss and accuracy in downstream tasks”. - This statement contradicts findings in other papers ([1], [2]), which find that model loss is a good proxy for aggregate metrics on downstream tasks. - This paper only looks at 3 downstream tasks and finds that they have different correlations with the loss. For such general statements, more downstream tasks should be considered; then, very likely, the observation from [1] and [2] would be confirmed. > L.130 (Figure 2): smaller models can sometimes exhibit higher inference latencies than larger models - Just the reference to Figure 2 is not enough. Why should this model be slower than a 14B model? I suspect problems with the Huggingface implementation. Needs more details and explanation. > The authors claim that their scaling law is inference aware. - There is only an indirect relation to the inference efficiency: The authors find that wider, shallower models have a smaller latency. > The paper asks the question: Given dataset and parameter constraints, can we train an inference-efficient and accurate model for downstream tasks? - In my view this question is too general. If I had to train an inference-efficient model, I would look for more architecture interventions than just the aspect ratio, e.g. Linear Attention, Hybrids, MLA, etc.
- Either the paper has to demonstrate that their law (or modification to the law) holds for other attention variants, like Linear Attention, MLA, or GQA, or limit its scope. [1] Sardana, Nikhil, et al. "Beyond chinchilla-optimal: Accounting for inference in language model scaling laws." arXiv preprint arXiv:2401.00448 (2023). [2] Gadre, Samir Yitzhak, et al. "Language models scale reliably with over-training and on downstream tasks." arXiv preprint arXiv:2403.08540 (2024). Methods And Evaluation Criteria: The paper uses the MSE, R^2 value and the Spearman correlation to compare the fits of the different scaling laws. I find these metrics suitable for comparing the curve fits. However, I would expect some standard scaling law plots that show the loss over the number of FLOPs or the number of parameters. L. 202: It is not obvious that the "most suitable model shape adjustment is the inclusion of the term $(1 + \epsilon R^\gamma)$". In my opinion this needs more discussion and motivation. Are there other possibilities to include the model shape? Theoretical Claims: N/A Experimental Designs Or Analyses: - Details on the latency measurements are missing. Is torch.compile used? Are other optimizations conducted, e.g. torch CUDA graphs? - Details on the FLOP calculation used in this paper are missing. Are embedding FLOPs counted? What about last linear layer (i.e. unembedding) FLOPs? - I did not find the final results of the fits in their paper, i.e. the values of the coefficients. It would be interesting to see how much their "chinchilla" fit deviates from the original "chinchilla" fit. This could help to assess how the paper's experimental setup, which contains models from 80M to 1B, relates to the original "chinchilla" setup. - The paper includes some ablations on the scaling law fits, where the data points for fitting the scaling law are selected differently.
- I would have expected additional ablations on different ways of accounting for the model aspect ratio, as this is the core contribution of this paper. Supplementary Material: N/A Relation To Broader Scientific Literature: In addition to the number of model parameters, the key contribution of the paper is a new scaling law formulation that incorporates the aspect ratio (embedding dim / num layers) of the model architecture into the scaling law. So while previous scaling law formulations (e.g. Chinchilla) depend on the number of parameters too, they remain unspecific about the allocation of the parameters in the model. Concurrent works fix this ratio for scaling law experiments [3]. [3] Busbridge, Dan, et al. "Distillation Scaling Laws." arXiv preprint arXiv:2502.08606 (2025). Essential References Not Discussed: A recent work which studies the discrepancies between the Kaplan scaling law and the Hoffmann (Chinchilla) scaling law is relevant [4]. [4] Porian, T., Wortsman, M., Jitsev, J., Schmidt, L., & Carmon, Y. (2025). Resolving discrepancies in compute-optimal scaling of language models. Advances in Neural Information Processing Systems, 37, 100535-100570. Other Strengths And Weaknesses: Strengths: - The paper studies the important question of how to build inference-efficient language models. - The paper studies the impact of the aspect ratio (embedding dim / num layers) on scaling laws. - The paper tries to include key metrics for inference, such as latency, in scaling laws for language models. Weaknesses: - Only very small models are investigated. - It is not clear whether wider but shallower models also perform that well at very large scale (e.g. >7B parameters). - The paper proposes to include the aspect ratio (embedding dim / number of layers) in the scaling law. However, from Figure 4 it seems that the loss is relatively robust to the choice of this ratio, which suggests not accounting for it.
- I suggest including a comment on other concurrent work that suggests fixing this ratio (see Busbridge et al., Distillation Scaling Laws, 2025, http://arxiv.org/abs/2502.08606). - Their methodology for selecting inference-efficient language models seems to be motivated by the problem that predicting the downstream performance from language modeling loss is challenging. - However, instead of resolving this issue, the proposed method adds another selection step, where another metric (namely the latency) is used to select model candidates. - In my view the contribution of this proposed methodology is limited and not novel, as the "novel" method is to measure latency and select the fastest candidates, which is the setup of classical hyperparameter tuning. - No other architectural changes than changing the aspect ratio between embedding dim and the number of layers are investigated. - For a paper that claims to scale “inference-efficient language models”, architectural changes such as GQA or MQA cannot be neglected. - At least there should be an attempt to investigate how these methods could be accounted for in the law. - Or it must be demonstrated that these methods have no impact on the scaling law. - Just from Equation (4) it seems that only the aspect ratio is accounted for, but the aspect ratio alone does not determine the latency. Therefore, it is unclear how the constraint formulated in Equation (3) can be fulfilled. - Their Morph-1B model is “just” a wider and shallower attention Transformer model. - It is not clear how the scaling law “guides” this model design (e.g. in number of parameters or number of training tokens). - It could have just been selected from the observation that wider and shallower models are faster at inference. Other Comments Or Suggestions: - L. 292, Table 2: Caption: Typo "of of".
Questions For Authors: - Why did the authors not include "standard" scaling law plots in their paper, where the loss is plotted over the number of parameters or the number of FLOPs? Code Of Conduct: Affirmed. Overall Recommendation: 2
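The fit-comparison metrics this review mentions (MSE, $R^2$, Spearman rank correlation) can be computed with the standard library alone; a minimal sketch with made-up observed and predicted loss values (not the paper's data):

```python
# Sketch of the three fit-comparison metrics used for scaling-law fits:
# MSE, R^2, and Spearman rank correlation (stdlib only, no ties assumed).

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def spearman(y_true, y_pred):
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rt, rp = ranks(y_true), ranks(y_pred)
    n = len(y_true)
    # Spearman's rho via the rank-difference formula (valid without ties)
    d2 = sum((a - b) ** 2 for a, b in zip(rt, rp))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Hypothetical observed vs. predicted losses for a few models
observed = [3.10, 2.85, 2.60, 2.42]
predicted = [3.05, 2.90, 2.58, 2.45]
print(mse(observed, predicted), r2(observed, predicted), spearman(observed, predicted))
```

Note that a perfect Spearman score only says the fit ranks models correctly, which is the property the paper's model-selection pipeline actually relies on; MSE and $R^2$ additionally measure how close the predicted loss values are.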
Rebuttal 1: Rebuttal: **Claims** **C1:** This paper only looks... **Answer:** From Figure 3 in [2], prior work has also observed that a smaller loss does not necessarily mean better performance on downstream tasks. See more results here: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34 **C2:** Just the reference to.... **Answer:** Thanks for your question about the inference system and how it affects latency. It is true that changing the inference system can change the latencies, and we show results from vLLM in Figures 13 and 14. However, even then we see some interesting behavior, where the latency of, say, MiniCPM-1B is 30-40% higher than that of Qwen2.5-1.5B. We will clarify this point and include an explanation in the final version. **Experimental Design:** **E1:** Details on the latency... **Answer:** We used two inference frameworks in our work: Hugging Face (Figures 1, 2, 3, 10, 11, and 12) and vLLM (Figures 13 and 14). When using Hugging Face, we directly called the generate function. When using vLLM, we used torch.compile and torch CUDA graphs. **E2:** Details on the FLOP calculation... **Answer:** In this paper, we do not calculate FLOPs; instead, we directly measure the inference latency of models on GPUs. While FLOPs are a valuable metric, they do not accurately represent the real-world latency of a model [3]. Therefore, we focus on measuring the end-to-end inference latency. **Q1:** Only very small models are investigated.... **A1:** Due to limited computational resources, we cannot afford to train larger models, such as a 7B model with over 140B tokens. We plan to expand our experiments to include larger models (>7B parameters) when we obtain more computational resources. **Q2:** The paper proposes to include the aspect... **A2:** We note that Distillation Scaling Laws was submitted to arXiv on Feb 12th, 12 days after the ICML deadline. We will discuss it in the final version.
As for the aspect ratio, Figure 4 illustrates that with a fixed number of parameters, the loss can increase by up to 5% by varying the aspect ratio. In general, models with higher losses are more likely to exhibit poorer performance on downstream tasks, particularly when the loss increases by 5% or more. In addition, as shown in Figure 3, the model shape plays a crucial role in inference latency. Therefore, we believe we should account for this in the scaling law to co-optimize model loss and inference latency. **Q3:** Their methodology for selecting... **A3:** Thank you for the question. We would first like to clarify our pipeline here. In our pipeline, we incorporate inference latency into scaling laws, enabling them to accurately predict the loss ranking of candidate models in comparison to the Chinchilla law. However, due to the gap between loss and accuracy, as illustrated in Figure 5, we believe the effective solution is to evaluate models based on their performance on downstream tasks. Consequently, we choose the top-k models predicted by our scaling laws, pre-train them, and then evaluate their performance on downstream tasks. We then select the best model to release based on inference latency and downstream task performance. We note that both the updated formulation of our scaling law and how to integrate the proposed scaling law into the model selection pipeline are contributions of our work. **C3 & Q4:** No other architectural changes... **A4:** We agree with your points. In Section 6, “Limitations and Future Work,” we regard architectural changes such as GQA or MQA as future work. **Q5:** Just from Equation (4)... **A5:** As shown in Figure 3, with a fixed number of parameters, the aspect ratio predominantly affects the latency. Therefore, we want to solve the following optimization problem in the paper: $\arg\min L(N, D, R) = (E + AN^{-\alpha} + BD^{-\beta}) \cdot (1 + \epsilon R^{\gamma})$ s.t. $N \leq N_{C}$, $D \leq D_{C}$, $T_{inf} \leq T_{C}$ **Q6:** Their Morph-1B model...
**A6:** Given that we have a 24-layer base model Morph-1B-v1, we first propose several candidate models with lower inference latency. The configurations we used are (2560, 16), (3072, 12), (3200, 10), (3328, 8), (4096, 6), and (4608, 4), where the first element is the hidden size and the second element is the number of layers. Next, we use the scaling law to predict the loss of each model. Since our scaling laws can predict the rank of model loss more accurately than Chinchilla's, we selected the top 2 candidates (3072, 12) and (2560, 16) for pre-training. After we obtain the pre-trained models, we evaluate them on downstream tasks and release the best one. **Q7:** Why did the authors not include "standard" scaling law plots... **A7:** https://anonymous.4open.science/r/ICML25-Rebuttal-3B34 [1] Beyond chinchilla-optimal: Accounting for inference in language model scaling laws. arXiv 2023. [2] Language models scale reliably with over-training and on downstream tasks. arXiv 2024. [3] Run, don't walk: chasing higher FLOPS for faster neural networks. CVPR 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for their effort in this answer. I see now there are no FLOP-controlled experiments (isoFLOP curves); instead the token-param ratio is varied over {20,40,160}, which does not require FLOP calculation. (Even though for the FLOP figures provided in the rebuttal, they must have been calculated somehow.) Note on the additional Figures: thanks for the effort to create the additional figures. It would have been nice to mark the different models with different colors (e.g. different token-param ratios or different model sizes in the loss-FLOP plot). In general my questions regarding Figure 4 remain. In the paper it is stated: >We plot the loss values against the aspect ratio in Figure 4. From the figure, we can see that the most suitable model shape adjustment is the inclusion of the term $(1 + \epsilon R^\gamma)$ Why does the term have this shape?
I see that the dashed line (which is the proposed law) predicts the points reasonably well. How would the fit look without the term? As you point to Figure 3: It is not clear to me how Latency (s) is measured (same as for Figure 1?). Typically inference performance in terms of speed is measured in Time to First Token (s) and Generation Throughput (toks/s) (or per-token latency). E.g. this is a nice recent source on this: [https://jax-ml.github.io/scaling-book/inference/](https://jax-ml.github.io/scaling-book/inference/) Unfortunately, I am still not fully convinced, so I tend to keep my score. --- Reply to Comment 1.1.1: Comment: We appreciate your response! Our answers are as follows: 1. We calculate FLOPs as 6ND, following the method outlined in [1], where N represents the number of parameters and D the number of tokens used in training. 2. We have updated the loss-FLOPs figures based on your recommendation of using different colors at https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/FLOPs_rebuttal/loss_vs_flops_rebuttal.pdf 3. As for the shape of the term, similar to prior work [2] on scaling laws, our work is guided by the trend of losses observed across various model variants. The scaling laws without our new term will be the same as the Chinchilla scaling laws [1]. In Figures 7-9, we compare our scaling laws with the Chinchilla scaling law (i.e., with our term vs. without the term). We observe that our scaling laws significantly reduce the prediction error. 4. In terms of latency measurements, for Figure 3, we use the same setting as Figure 1 to measure the latency of models. All evaluations were performed using the Hugging Face generate function on a single NVIDIA Ampere 40GB A100 GPU with batch size 1, input length 128, and output length 256. We will clarify this in the paper. 5. We have more results here: To measure throughput (tokens / s), we fix the number of input tokens as 128 and the number of output tokens as 256 in all throughput experiments.
All evaluations were performed using the Hugging Face generate function on a single NVIDIA Ampere 40GB A100 GPU To begin with, we fix the number of parameters and vary the batch size. The results of the 1B, 3B, and 7B models are shown: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/tput/tput_vs_batch_size_1B_variants.pdf, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/tput/tput_vs_batch_size_3B_variants.pdf, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/tput/tput_vs_batch_size_7B_variants.pdf. The legend displays the model's hidden size / n_layers ratio. **We find that models with wider dimensions achieve higher throughput than narrower models with the same parameter count.** Then, we fix the hidden size and vary the batch size. The results are shown here: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/tput/tput_vs_batch_size_with_layers_hidden_size_4096.pdf. The legend shows the number of layers of the model. We observe that throughput decreases linearly with the number of layers when the hidden size remains constant. We also have similar observations in https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/tput/tput_vs_batch_size_with_hidden_size_layer_4&8.pdf. In the legend, "4+4096" represents a model with 4 layers and a hidden size of 4096, while "8+8192" denotes a model with 8 layers and a hidden size of 8192. Finally, we fix the number of layers to 8 and vary the batch size. The results are shown here: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/tput/tput_vs_batch_size_with_hidden_size_layer_8.pdf. We've noticed that models with a consistent number of layers but a smaller hidden dimension consistently achieve higher throughput. To measure Time To First Token (TTFT), we fix the number of input tokens as 128 and the number of output tokens as 1 in all TTFT experiments. 
All evaluations were performed using the Hugging Face generate function on a single NVIDIA Ampere 40GB A100 GPU. We first fix the number of parameters and vary the batch size. The results of the 1B, 3B, and 7B models are shown: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/ttft/ttft_vs_batch_size_1B_variants.pdf, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/ttft/ttft_vs_batch_size_3B_variants.pdf, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/ttft/ttft_vs_batch_size_7B_variants.pdf. The legend indicates the ratio of hidden size to n_layers in the model. **We observe that models with a wider configuration exhibit lower TTFT compared to narrower models, given the same number of parameters.** Additionally, we illustrate the correlation between TTFT, layers, and hidden size in the PDF files provided, following a similar experimental setup as our throughput tests. https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/ttft/ttft_vs_batch_size_with_hidden_size_layer_4&8.pdf, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/ttft/ttft_vs_batch_size_with_hidden_size_layer_8.pdf, and https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure3_rebuttal/ttft/ttft_vs_batch_size_with_layers_hidden_size_4096.pdf. [1] Hoffmann, Jordan, et al. "Training compute-optimal large language models." arXiv preprint arXiv:2203.15556 (2022). [2] Gadre, Samir Yitzhak, et al. "Language models scale reliably with over-training and on downstream tasks." arXiv preprint arXiv:2403.08540 (2024).
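The $(1 + \epsilon R^{\gamma})$ term discussed in point 3 and the candidate ranking from A6 can be sketched together. All coefficient values below are hypothetical placeholders, not the paper's fitted values:

```python
# Sketch of the inference-efficient law
#   L(N, D, R) = (E + A*N^-alpha + B*D^-beta) * (1 + eps*R^gamma)
# applied to the candidate shapes from A6. Coefficients are made-up
# placeholders, not the paper's fitted values.

E, A, B = 1.8, 400.0, 2000.0
alpha = beta = gamma = 0.34
eps = 0.01

def chinchilla_loss(n, d):
    """Plain Chinchilla form, with no dependence on model shape."""
    return E + A * n ** -alpha + B * d ** -beta

def inference_efficient_loss(n, d, hidden_size, n_layers):
    r = hidden_size / n_layers  # aspect ratio R
    return chinchilla_loss(n, d) * (1 + eps * r ** gamma)

# Candidate (hidden_size, n_layers) shapes from A6, all ~1B parameters.
candidates = [(2560, 16), (3072, 12), (3200, 10), (3328, 8), (4096, 6), (4608, 4)]
N, D = 1e9, 20e9
ranked = sorted(candidates, key=lambda c: inference_efficient_loss(N, D, *c))
print(ranked)
```

With fixed N and D and $\epsilon > 0$, the multiplicative term makes the predicted loss increase monotonically in R, so the narrowest candidates rank first, and wider (lower-latency) shapes must justify their latency advantage against the predicted loss gap; the plain Chinchilla form assigns all six candidates the same loss and cannot rank them at all.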
Summary: The paper observes that architecture modifications significantly affect inference latency whilst holding total model size fixed. The paper primarily uses model aspect ratio $r=d_{model}/n_{layers}$ as the parameterization of architecture. The paper then introduces an inference-efficient scaling law, which is sensitive to $r$ and enables constrained optimization subject to size, data, and latency constraints. The resulting process enables systematic improvement of the efficiency-accuracy Pareto frontier. ## Update after rebuttal I am maintaining my score of 5: Strong Accept. I can see this significantly deviates from some reviews presented by other reviewers. The reasons I think this paper is well worth accepting are: 1. The empirical procedure is sound 2. The results are of practical value, although they are i) hardware and ii) architecture/task-constrained The limitation of point 2i) is necessary in order to show a practical use case. A theoretical estimate of latency could also work here, but in general it's not clear how to map this estimate to scenarios of interest. The task-constrained limitation is a consequence of essentially all empirical scaling law studies, and the authors should not be penalized for this. The outstanding issue on the paper raised by another reviewer is why the specific form of the scaling law works from a first-principles perspective. It would be great to have clarification around it; however, for practical utility, all that is required for empirical studies is correct boundary conditions, asymptotic behavior, and good empirical fit, each of which the results meet. Claims And Evidence: All primary claims in the paper are supported with clear, convincing explanations and evidence: 1. Aspect ratio $r$ influences inference latency 2. The inference-efficient scaling law (Equation 4) that incorporates aspect ratio is a good predictor of model cross-entropy 3.
Following their constrained optimization procedure pushes the inference-accuracy Pareto frontier. There is a secondary claim which is supported by evidence in the paper, but I feel further evaluation is required: "The inference-efficient scaling law is more robust than the Chinchilla scaling law" (see e.g. line 358). First, "robust" is an overloaded term. It's true that the inference-efficient scaling law fits better than the Chinchilla law on the data provided (e.g. Fig 8c and 9c). However, it's unclear to me if this means the law is robust, which usually means with respect to some perturbation. This claim needs to be quantified/made explicit, e.g. using a bootstrap approach. If instead the paper means to convey the law is more accurate, the data from the Chinchilla study is available, see [1]. As a test of the inference-efficient scaling law, it would be useful to understand if it is a better fit on the Chinchilla data than the Chinchilla law (the aspect ratios of the models in Chinchilla are provided in the Chinchilla Appendix [2]). Such an observation/confirmation would be extremely useful to the community, beyond the context of targeting inference latency. [1] Chinchilla Scaling: A replication attempt https://arxiv.org/abs/2404.10102 [2] Training Compute-Optimal Large Language Models https://arxiv.org/abs/2203.15556 Methods And Evaluation Criteria: Yes, the datasets and benchmarks chosen are all sensible and relevant. Theoretical Claims: No theoretical claims are made in the work. Experimental Designs Or Analyses: I checked the soundness and validity of all experimental designs. The paper makes sensible choices throughout. However I will make comments about two aspects of the investigation. ### Excluding over-training data In Figure 8 the paper shows the scaling law fits and extrapolations for training with only Chinchilla-optimal models $D=20N$. 
From the form of the scaling law $$ L(N,D)=E+AN^{-\alpha}+BD^{-\beta} $$ when we use Chinchilla-optimal models, then $L(N)=E+AN^{-\alpha}+B(20N)^{-\beta}=E+AN^{-\alpha}+B^\prime N^{-\beta}$. Further, as you then set $\alpha=\beta$ (line 248), the equation becomes $L(N)=E+(A+B^\prime)N^{-\alpha}=E+A^{\prime\prime}N^{-\alpha}$. Even though $A^{\prime\prime}$ can be identified through your fitting procedure, $A$ and $B^\prime$ cannot (there are infinitely many pairs of numbers summing to $A^{\prime\prime}$), and so the independent effects of $N$ and $D$ are lost. I.e. the data does not enable identification of the scaling coefficients (which is what you are finding in practice). To circumvent this issue cleanly with a minimal compute budget, IsoFLOP protocols are often used (e.g. [2]). [2] Training Compute-Optimal Large Language Models https://arxiv.org/abs/2203.15556 ### Choice of matched coefficients In line 249 the authors set $\alpha=\beta=\gamma$. There is prior evidence $\alpha=\beta$, e.g. [2]. It is unclear to me why the choice to also set $\gamma$ to these values was made. [2] Training Compute-Optimal Large Language Models https://arxiv.org/abs/2203.15556 Supplementary Material: I reviewed all of the supplementary material. Relation To Broader Scientific Literature: The work generalizes the constrained optimization training procedures of [2, 3] from compute-optimal, to lifecycle compute-optimal, to latency-optimal within practical constraints. The work is complementary to other investigations in architecture modification that can improve latency at a given model capability, for example Sparse MoEs [4, 5]. 
[2] Training Compute-Optimal Large Language Models https://arxiv.org/abs/2203.15556 [3] Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws https://arxiv.org/abs/2401.00448 [4] Scaling Laws for Fine-Grained Mixture of Experts https://arxiv.org/abs/2402.07871 [5] Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models https://arxiv.org/abs/2501.12370 Essential References Not Discussed: No essential references are missing. Other Strengths And Weaknesses: The paper is extremely easy to read, is well-structured, and has well-made figures. Other Comments Or Suggestions: 1. In line 437, you say that your approach "avoids estimating tokens generated". I would rephrase this, as it implies your approach is circumventing a problem in earlier work. To me it seems more like there is only a philosophical difference. The current work is concerned with model latency, which is an instantaneous measure, and useful to control. Prior work [3] is concerned with the total carbon footprint of a model during its lifetime. These constraints are both important. 2. I would not call Equation 4 an "inference-efficient" scaling law. To me, it is a scaling law that can estimate cross-entropy, and can take into account the model aspect ratio. In this sense, I would say Equation 4 is more like an "aspect-ratio sensitive/aware" law. Equation 4 can be used to then induce inference efficiencies, but that follows from downstream analysis, and not from Equation 4 itself. 3. Typo: there is a power of 2 missing in the MSE definition on line 271. 4. Provide your coefficient fits in the paper, preferably with bootstrap CIs. [3] Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws https://arxiv.org/abs/2401.00448 Questions For Authors: 1. What happens if you allow $\gamma\neq \alpha$? 2. Is there a proxy/notion of latency we can use which is hardware agnostic? 3. 
Do you observe any systematic relationship between model aspect ratio and downstream evaluation? (e.g. in [6] it is shown that denser models perform better on reasoning tasks at a fixed number of parameters than sparse MoEs). [6] Mixture of Parrots: Experts improve memorization more than reasoning https://arxiv.org/abs/2410.19034 Code Of Conduct: Affirmed. Overall Recommendation: 5
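The identifiability argument in the experimental-design discussion above can be checked numerically. A short sketch with illustrative constants (not the paper's fitted values):

```python
import numpy as np

N = np.logspace(7, 10, 50)              # model sizes along the D = 20N slice
E, alpha = 1.7, 0.34                    # illustrative constants

def loss(A, B):
    # L(N, D) = E + A N^{-alpha} + B D^{-alpha}, evaluated only on D = 20N
    D = 20 * N
    return E + A * N**-alpha + B * D**-alpha

# Two very different (A, B) pairs sharing the same A + B * 20^{-alpha}
# produce identical losses on this data slice, so (A, B) is unidentifiable:
L1 = loss(A=300.0, B=100.0)
L2 = loss(A=300.0 + 100.0 * 20**-alpha, B=0.0)
assert np.allclose(L1, L2)
```

No fitting procedure can separate the two parameter settings from data restricted to the $D=20N$ slice, which is exactly why IsoFLOP-style protocols vary $N$ and $D$ independently.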
Rebuttal 1: Rebuttal: **Q1:** In line 437, you say that your approach "avoids estimating tokens generated". I would rephrase this, as it implies your approach is circumventing a problem in earlier work. To me it seems more like there is only a philosophical difference. The current work is concerned with model latency, which is an instantaneous measure, and useful to control. Prior work [3] is concerned with the total carbon footprint of a model during its lifetime. These constraints are both important. **A1:** Thanks for pointing it out! We agree that the total carbon footprint of a model during its lifetime is also important. We will rephrase the sentence in the final version. **Q2:** I would not call Equation 4 an "inference-efficient" scaling law. To me, it is a scaling law that can estimate cross-entropy and can take into account the model aspect ratio. In this sense, I would say Equation 4 is more like an "aspect-ratio sensitive/aware" law. Equation 4 can be used to then induce inference efficiencies, but that follows from downstream analysis and not from Equation 4 itself. **A2:** Thank you for the feedback. We will improve the name in the final version. **Q3:** Typo: there is a power of 2 missing in the MSE definition on line 271. **A3:** We will fix this typo in the final version. **Q4:** Provide your coefficient fits in the paper, preferably with bootstrap CIs. **A4:** For Figure 7, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure_7_coefficient.png; For Figure 8, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure_8_coefficient.png; For Figure 9, https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/Figure_9_coefficient.png. **Q5:** What happens if you allow $\gamma \neq \alpha$? **A5:** Thank you for your suggestion! Given that $\gamma$ is the exponent of $R$, we often face overflow issues when using scipy.optimize.curve_fit. To circumvent this problem, we can use the bounds parameter within the function. 
When removing the assumption that $\gamma = \alpha$ and applying necessary bounds to avoid overflow, we found that our scaling laws become more accurate compared to existing ones. In Figure 7, the R² will become 0.9985 (from 0.9982), and the MSE will become 0.0005 (from 0.0006). We will update the results in the final version. **Q6:** Is there a proxy/notion of latency we can use which is hardware agnostic? **A6:** In this study, we advocate for designing a wider and shallower Transformer model. This approach is based on the understanding that inference in models occurs sequentially, layer by layer, while GPUs can process each layer concurrently. Therefore, the notion that a wider and shallower Transformer model has lower inference latency is hardware-agnostic, though the degree would differ based on the specific hardware. **Q7:** Do you observe any systematic relationship between model aspect ratio and downstream evaluation? (e.g. in [6] it is shown that denser models perform better on reasoning tasks at a fixed number of parameters than sparse MoEs). **A7:** Thanks for your question! However, we do not observe any systematic relationship between the model aspect ratio and downstream evaluation. We will include this in future work. [3] Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws https://arxiv.org/abs/2401.00448 [6] Mixture of Parrots: Experts improve memorization more than reasoning https://arxiv.org/abs/2410.19034 --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my questions, as well as those of other reviewers. I have also considered the points raised by other reviewers. The only problematic one for me is raised by kVaw regarding why exactly the aspect ratio should be represented as $(1+\epsilon R^\gamma)$, and I do agree with kVaw that a reader would benefit from following the thought process leading to this conclusion. 
Overall this does not change my perspective on whether the contribution is i) correct (it is, to the best of my assessment) or ii) useful. I do agree with reviewer B9RC that our confidence in extrapolation is partially limited by the range of model sizes in the "test set" for this experiment (which are not too large). Without significant compute resources, this is true of all scaling studies. Until a counter-example to the trend given in the current paper is shown, the findings of this work are useful and generally applicable. Consequently, I maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for your encouraging feedback. Regarding the $(1 + \epsilon R^{\gamma})$ term, our approach aligns with previous studies on scaling laws, as referenced in [1]. We base our work on the trends in losses observed across different model variants. The proposed scaling law is linked to model shape, particularly through the relationship with $R$. Therefore, $R$ naturally enters as one component of the scaling law. Furthermore, we introduce two learnable parameters, $\epsilon$ and $\gamma$, to refine the smoothness of the scaling law. If you have follow-up questions, please let us know! Thanks! [1] Gadre, Samir Yitzhak, et al. "Language models scale reliably with over-training and on downstream tasks." arXiv preprint arXiv:2403.08540 (2024).
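On the overflow point from A5 above: bounding $\gamma$ during fitting can be sketched as follows. The multiplicative placement of the $(1+\epsilon R^{\gamma})$ factor and all constants here are illustrative assumptions, not the paper's actual functional form or fitted coefficients:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic losses at a fixed (N, D), so the Chinchilla part collapses to a
# constant C and only the aspect-ratio factor remains (assumed multiplicative):
R = rng.uniform(16, 512, 30)              # aspect ratios d_model / n_layers
L = 2.8 * (1 + 0.02 * R**0.4)             # C=2.8, eps=0.02, gamma=0.4

def factor(R, C, eps, gamma):
    return C * (1 + eps * R**gamma)

# Bounding gamma keeps R**gamma from overflowing while the optimizer searches,
# the mechanism the rebuttal describes for scipy.optimize.curve_fit.
popt, _ = curve_fit(factor, R, L, p0=[2.0, 0.05, 0.5],
                    bounds=([0.0, 0.0, 0.0], [10.0, 1.0, 2.0]))
C_hat, eps_hat, gamma_hat = popt
assert abs(gamma_hat - 0.4) < 0.01        # clean data: parameters recovered
```

With `bounds` supplied, `curve_fit` switches to a trust-region solver, so the search never evaluates `R**gamma` at exponents outside the stated box.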
Summary: The paper presents revised inference-time scaling laws based on model architecture choices, relying on the observation that models of the same size but different architecture choices can have up to a 3.5 times difference in inference latency. Using that, they train models of varying sizes up to 1B parameters, improving inference latency by a factor of 1.8 while keeping the same accuracy on downstream tasks as that of open-source models of the same size. Claims And Evidence: Claims are supported by extensive experiments and ablations. Methods And Evaluation Criteria: Authors use standard evaluations, benchmark datasets and baselines which are sound. Theoretical Claims: Nothing to discuss in particular. Experimental Designs Or Analyses: Experimental settings are methodologically sound, experiments and ablations are informative and extensive. See questions for aspects to clarify. Supplementary Material: Yes, the appendices thoroughly cover experimental details overall. Relation To Broader Scientific Literature: The paper is relevant as a training recipe for future generations of LLMs and shows a novel finding relating inference latency with architecture design choices, which motivates further work in this direction. Essential References Not Discussed: None to be discussed to the best of my knowledge. Other Strengths And Weaknesses: None, all submitted either as part of the paper evaluation or as questions. Other Comments Or Suggestions: Nothing particularly important to mention. Questions For Authors: - Table 1 seems to only rely on 6 data points, how statistically valid is extrapolating to larger sizes in this case? - The Spearman correlation analysis (Figure 7c) strongly supports the scaling law’s ranking capability, could you explain the negative correlation for Chinchilla’s law? - The modified scaling law in Eq. 
4 introduces a $(1 + \epsilon R^\gamma)$ term to the Chinchilla loss function; while it works empirically, is there any theoretical rationale behind that? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1:** Table 1 seems to only rely on 6 data points, how statistically valid is extrapolating to larger sizes in this case? **A1:** As detailed in Table 4 of the Appendix, each model size includes several variants. We use 27 data points to fit the scaling laws in Figure 7. **Q2:** The Spearman correlation analysis (Figure 7c) strongly supports the scaling law’s ranking capability. Could you explain the negative correlation for Chinchilla’s law? **A2:** The Spearman correlation ranges from -1 to 1. In Figure 7, the actual ranks of the 4 models are [1, 2, 4, 3]; however, the ranks predicted by Chinchilla’s law are [4, 2, 3, 1]. Therefore, the Spearman correlation is negative. **Q3:** The modified scaling law in Eq. 4 introduces a $(1 + \epsilon R^\gamma)$ term to the Chinchilla loss function. While it works empirically, is there any theoretical rationale behind that? **A3:** Similar to prior work [1] on scaling laws, our work is guided by the trend of losses observed across various model variants. [1] Gadre, Samir Yitzhak, et al. "Language models scale reliably with over-training and on downstream tasks." arXiv preprint arXiv:2403.08540 (2024).
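The sign of the correlation in A2 can be reproduced directly from the quoted ranks; a minimal check:

```python
from scipy.stats import spearmanr

actual_ranks = [1, 2, 4, 3]        # actual ordering of the 4 models (from A2)
chinchilla_ranks = [4, 2, 3, 1]    # ordering predicted by Chinchilla's law

rho, _ = spearmanr(actual_ranks, chinchilla_ranks)
assert rho < 0                     # rho is about -0.4: a negative correlation
```

With no tied ranks this matches the classical formula $\rho = 1 - 6\sum d_i^2 / (n(n^2-1))$: the rank differences are $(-3, 0, 1, 2)$, so $\rho = 1 - 6\cdot 14/60 = -0.4$.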
Summary: Traditional scaling laws (like Chinchilla) do not account for model architectures in their modeling of the loss. This paper first highlights that the model architecture (like hidden dim, #layers) affects the downstream loss as well as the latency of the models (also studied in multiple previous works). They propose incorporating model shape (width to depth ratio) in the Chinchilla scaling law. They use this scaling law to extract a couple of choices for aspect ratios that might give a good enough (top-k) loss. The inference latency of these choices is then evaluated empirically, and combined with the loss scores/ranks to guide the final choice of model to train. Claims And Evidence: To some extent (See weakness section for details) Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: To some extent (See weakness section for details) Supplementary Material: Wasn't required Relation To Broader Scientific Literature: Helps guide choice of model architecture to train for practitioners. Essential References Not Discussed: Some very relevant missing references: https://arxiv.org/pdf/2109.10686 (SCALE EFFICIENTLY: INSIGHTS FROM PRE-TRAINING AND FINE-TUNING TRANSFORMERS) that talks about the impact of model shape on downstream loss https://arxiv.org/pdf/2305.13035 (effect of model shape in vision tasks) Other Strengths And Weaknesses: ### Strength: - I really liked the authors using their scaling law observations to come up with the most efficient (in terms of latency) 1B model. ### Weaknesses: - The authors emphasize the importance of estimating the downstream inference latency of models. However, they still propose to estimate model latency using an untrained model. In their final model selection algorithm, the latency scores and the predicted loss are combined to select k candidates for training. 
It would have been much more interesting if the scaling law could be formulated to directly predict a combined metric of inference latency and loss, rather than treating them separately. Can we not estimate inference latency based on d/n? I wonder what the reasons were behind the authors' choice of the specific modeling they present in the paper. - Moreover, inference latency depends on the number of tokens the model generates to answer a question at inference. More accurate models might be able to answer the same question in fewer tokens, through higher-level reasoning or by reducing unnecessary attempts. Given these factors, predicting inference latency solely based on architecture (using an untrained model), as the authors propose, seems problematic. - I strongly encourage the authors to precisely define how they are estimating downstream latency. - The authors incorporate the d/n term in their modified scaling law. While Figure 7 shows that this scaling law better estimates loss for the models they trained, my main concern is: how much does the d/n term vary in real-world models? In Figure 4, they show loss variation as d/n ranges from $2^2$ to $2^9$, but does it vary this much in actual models that practitioners train (e.g., Gemma, LLaMA, Qwen series)? I encourage the authors to demonstrate how much the d/n term (the ratio R) varies across existing open-source models and how accurately one could use it to estimate their loss. They could, for instance, use a best-fit approximation for a held-out dataset to estimate the loss (perplexity) of these models or, at the very least, highlight the actual range of variation of R across these models. - This concern is further supported by Table 2. Among the top three Morph-1B model variants trained by the authors, while the aspect ratio varies significantly, the downstream performance remains nearly the same. 
This suggests that for any **reasonable range of choice** of aspect ratio, d/n may not be that critical in modeling loss through scaling laws. - Finally, the prediction accuracy of the trained scaling laws has only been evaluated on models 3× larger than those in the training set. In general, one would want to test at least an order of magnitude larger models to validate the law’s extrapolation capabilities. However, I understand this might stem from computational constraints, and I will not weigh this limitation heavily in my final assessment. ### Missing References: The paper overlooks some relevant prior work that studies the impact of model shape on scaling laws: - [Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers (Tay et al., 2022)](https://arxiv.org/pdf/2109.10686) – Discusses how model shape (depth vs. width) impacts downstream performance and questions whether existing scaling laws capture these effects. - [Effect of Model Shape in Vision Tasks (Alabdulmohsin et al., 2023)](https://arxiv.org/pdf/2305.13035) – Explores how model width and depth interact in Vision Transformers (ViTs) and whether similar trends hold across domains. Other Comments Or Suggestions: See the weakness section Questions For Authors: See the weakness section Code Of Conduct: Affirmed. Overall Recommendation: 1
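For context on the range question raised above, the aspect ratio $R = d_{model}/n_{layers}$ of a few widely used open models can be computed directly from their published configurations (dimensions quoted from public model cards; listed here for illustration, not taken from the paper):

```python
# (d_model, n_layers) as publicly documented for each model
configs = {
    "GPT-2 small": (768, 12),
    "LLaMA-2-7B":  (4096, 32),
    "LLaMA-2-13B": (5120, 40),
    "LLaMA-2-70B": (8192, 80),
}
ratios = {name: d / n for name, (d, n) in configs.items()}
for name, r in ratios.items():
    print(f"{name}: R = {r:g}")   # R spans roughly 64-128 for these models
```

These few points already suggest that production models cluster in a narrower band than the full $2^2$ to $2^9$ sweep, which is the heart of the reviewer's concern.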
Rebuttal 1: Rebuttal: **Q1:** The authors emphasize the importance of estimating the downstream inference latency of models... **Q2:** Moreover, inference latency depends on the number of tokens the model generates to answer a question at inference... **Q3:** I strongly encourage the authors to define precisely how they are estimating downstream latency. **A1 & A2, & A3:** We believe these three questions are related and address them together. Firstly, we agree with the reviewer that the end-to-end latency for downstream tasks depends on the number of tokens generated to answer the questions. End-to-end latency can be seen as the time per output token times the number of generated tokens. However, predicting the number of tokens generated to answer downstream tasks is challenging since we do not target any specific downstream task during pre-training. Therefore, we follow the approach used in previous work [1, 2]; we measure the latency of models by fixing the number of input and output tokens and focus on minimizing the time per output token. For example, we fix the input length as 128 and the output length as 256 in Figure 1. As a result, we don't need to model the latency in the scaling law and can instead measure a model's time per output token using an untrained model. Thus, we use empirically measured model latencies in our proposed model selection pipeline. We would also like to stress that reducing response length while maintaining answer quality is of independent research interest and orthogonal to our work. In our paper, we follow the previous work [1, 2] and measure the latency of models while fixing the number of input and output tokens. We will make it clearer in the main text in the final version. **Q4:** The authors incorporate the d/n term in their modified scaling law.... 
**A4:** Here, we show the d/n term for 15 open-sourced models: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/open-sourced-models-d_n.png In Figure 4, we illustrate the variation in loss as d/n ranges from $2^4$ to $2^9$ rather than from $2^2$ to $2^9$. From the above table, we believe that our ranges are reasonable. **Q5:** This concern is further supported by Table 2. Among the top three Morph-1B model variants trained by the authors, while the aspect ratio varies significantly.... **A5:** Our experimental results indicate that an aspect ratio between 64 and 256 is a practical choice for 1B models. However, as illustrated in Figure 5, the performance of two models on downstream tasks can differ even if their losses are similar. For instance, while the losses of two 164M model variants are recorded as 3.32 and 3.35, their accuracies on the BoolQ dataset differ significantly, with scores of 0.5379 and 0.5734, respectively. Therefore, while the d/n ratio may have a limited impact on model loss under a reasonable range, the performance on downstream tasks varies. **Q6:** Finally, the prediction accuracy of the trained scaling laws has only been evaluated on models 3× larger than those in the training set.... **A6:** Thanks for your suggestions! We agree with your points. We plan to expand our experiments to include larger models when we obtain more computational resources. Missing References: The paper overlooks some relevant prior work that studies the impact of model shape on scaling laws: Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers (Tay et al., 2022) – Discusses how model shape (depth vs. width) impacts downstream performance and questions whether existing scaling laws capture these effects. Effect of Model Shape in Vision Tasks (Alabdulmohsin et al., 2023) – Explores how model width and depth interact in Vision Transformers (ViTs) and whether similar trends hold across domains. Thank you for the suggestions. 
The 'Scale Efficiently' paper explores the influence of model shape on downstream performance, but it ignores the effect on inference efficiency. Regarding the second paper, 'Effect of Model Shape in Vision Tasks,' it focuses on the impact of model shape on Vision Transformers (ViTs), whereas our study examines its impact on Large Language Models (LLMs). We will include these papers in the related work section. [1] Zhong, Yinmin, et al. "{DistServe}: Disaggregating prefill and decoding for goodput-optimized large language model serving." OSDI 24. [2] Yu, Gyeong-In, et al. "Orca: A distributed serving system for {Transformer-Based} generative models." OSDI 22. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. > Range of d/n ratio for open-source models From the d/n ratios of open-source models linked in the response, it is evident that d/n typically falls within the range of 2⁶ to 2⁷. In contrast, the authors argue for searching across a much wider range, which does not appear practical. For instance, the authors' suggested optimal d/n value is far outside the commonly observed range. > BoolQ dataset results The reported scores on the BoolQ dataset (0.5379 vs. 0.5734) are quite low for a binary classification task. A difference between 53% and 57% accuracy is not sufficient to draw strong conclusions. Therefore, my concern remains that the choice of d/n may not have a meaningful impact on downstream performance, especially when compared to a natural baseline guess informed by existing models. I once again thank the authors for their response. However, I believe the manuscript still requires substantial improvement in justifying the search for optimal d/n and showing that it offers real benefits over conventional heuristics. My concerns remain, and I am maintaining my current rating. --- Reply to Comment 1.1.1: Comment: Thanks for your response! 
Our answers are as follows: > Range of d/n ratio for open-source models First, we present additional results for the d/n term from open-sourced models here: https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/open-sourced-models-d_n-rebuttal.png. From https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/open-sourced-models-d_n.png and https://anonymous.4open.science/r/ICML25-Rebuttal-3B34/open-sourced-models-d_n-rebuttal.png, our observation shows that the minimum d_n value among open-sourced models is 29.54 (MiniCPM-1B [1]), and the maximum is 199.11 (Gemma-3-27B [2]), which supports the d/n range we discuss in our paper as being reasonable. Furthermore, with 21 unique d_n values identified, a comprehensive search range is required. Since pre-training requires significant time and resources, developing effective scaling laws based on model shape could help inform model design decisions and reduce training costs. > BoolQ dataset results First, we do not believe that the reported scores on the BoolQ dataset are quite low for two 164M model variants compared with the results reported in https://paperswithcode.com/sota/question-answering-on-boolq. Moreover, we present more results of the 1B model here: while the losses of Morph-1B-v1 and Morph-1B-v2 are recorded as 2.896 and 2.909, their accuracies on the BoolQ dataset differ significantly, with scores of 0.5758 and 0.6049, respectively. Given OPT-IML 1.3B (zero-shot)'s performance of 61.5% on BoolQ, as documented at https://paperswithcode.com/sota/question-answering-on-boolq, we believe our results are strong for our class of models even with much fewer training resources and demonstrate meaningful differentiation in downstream performance. [1] Hu, Shengding, et al. "Minicpm: Unveiling the potential of small language models with scalable training strategies." arXiv preprint arXiv:2404.06395 (2024). [2] Team, Gemma, et al. "Gemma 3 Technical Report." arXiv preprint arXiv:2503.19786 (2025).
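The latency decomposition used earlier in this rebuttal thread (end-to-end latency as time per output token times the number of generated tokens, with input/output lengths fixed as in Figure 1) can be sketched numerically. All timing numbers here are illustrative placeholders, not measurements from the paper:

```python
# Illustrative numbers only (not measurements from the paper).
tpot = 0.012                      # time per output token, seconds
n_in, n_out = 128, 256            # fixed lengths, as in Figure 1

end_to_end = tpot * n_out         # the rebuttal's decomposition
print(f"{end_to_end:.3f} s")      # 3.072 s

# A common refinement in the serving literature separates the first
# token (prefill) from the steady-state decode:
ttft = 0.050
refined = ttft + tpot * (n_out - 1)
print(f"{refined:.3f} s")         # 3.110 s
```

Because the lengths are fixed, minimizing time per output token minimizes end-to-end latency, which is why the rebuttal can sidestep predicting how many tokens a downstream task will generate.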
Mixed-curvature decision trees and random forests
Accept (poster)
Summary: The paper introduces mixed-curvature decision trees (DTs) and random forests (RFs), which can be used to analyse data living on product manifolds: combinations of Euclidean, hyperspherical and hyperbolic spaces, allowing for heterogeneous curvature. DTs are reformulated using angles to respect the manifold geometry, ensuring that splits are geodesically convex, maximum margin and composable. The methods are evaluated on classification, regression and link-prediction tasks, where they show strong empirical performance. They are also used to analyse graph embeddings and latent spaces from variational autoencoders. **Update after rebuttal**. The authors clarified that the work is downstream of signature selection. I broadly buy that generalising DTs and RFs to mixed curvature product manifolds might be novel and interesting. I don't think that statistically significant experimental gains across the board are crucial for acceptance, so will stick with my current score. I defer to other reviewers on whether there are problems with the theoretical analysis. Claims And Evidence: The authors convincingly argue that DTs and RFs can be generalised to all constant-curvature manifolds by representing data and splits as angles in two-dimensional subspaces. They show competitive performance on a wide range of benchmarks, including classification, regression and link prediction. The datasets include synthetic data, graph-embeddings, mixed-curvature VAE latent space, and empirical datasets. Their methods offer a nice interpolation between linear classifiers (ineffective but interpretable) and neural networks (effective but uninterpretable). Methods And Evaluation Criteria: The methods and evaluation criteria make sense; see above. Theoretical Claims: The paper provides a proof for the theoretical claim that Euclidean DTs using the angular reformulation are equivalent to classical decision trees that perform thresholding in the basis dimensions (see App C). 
This demonstrates that, while the presentation is unconventional, it is mathematically equivalent to traditional methods. I didn’t check the maths in detail, but this seems reasonable. Experimental Designs Or Analyses: I think the experimental designs are reasonable; see above. Supplementary Material: Appendix A gives details for sampling mixtures of Gaussians to generate classification and regression datasets. Appendix B gives pseudocode for the product space decision tree. Appendix C gives a proof of equivalence for the Euclidean case. Appendix D summarises the angular midpoint formulae for different manifolds. Appendix E-I give exhaustive experimental details and further results. These all seem reasonable. Relation To Broader Scientific Literature: The paper builds on previous work on machine learning in product manifolds, including Tabaghi et al. (2021, 2024) and Cho et al (2023). This includes linear classifiers, perceptrons, SVMs, PCAs and Transformers. It generalises work by Doorenbos (2023) and Chlenski (2024) on RFs in hyperbolic space, by permitting mixed curvature. Product manifolds have also been used to embed graphs (Wang 2021) and variously in biology. Essential References Not Discussed: The references seem reasonable. Other Strengths And Weaknesses: **Strengths**: - Well-written, clear, convincing text - The core idea – generalising DTs and RFs to mixed curvature product manifolds – seems novel and interesting. Whilst I wouldn’t expect this to ultimately compete with SOTA deep learning, I buy that this might be an effective, interpretable tool in certain simple cases. - Benchmarking on classification, regression and link prediction tasks on synthetic data, graph embeddings, mixed-curvature VAE latent spaces and empirical data is pretty exhaustive **Weaknesses**: - As the authors acknowledge, they view their work as downstream of signature selection and embedding generation. 
This avoids the fact that good product manifold embeddings might be difficult to find or expensive, and might limit the practicality of their algorithms. Other Comments Or Suggestions: I understand that space is limited, but it feels a shame to have Alg 1 in the appendix. I wonder if a shorter version could be included in the main body. Questions For Authors: - Can you comment more on signature selection? How difficult is this in practice? You acknowledge that ‘product manifolds are not able to represent all patterns in data’, referencing Borde & Kratsios (2023). What types of data patterns or structures are difficult for product manifolds to capture, and how might these limitations be addressed in future work? - You mention ‘tradeoffs between DTs and RFs and other high-performing methods, especially graph neural networks when topologies are known’. In what specific scenarios do GNNs outperform your method? What characteristics of the data or task contribute to this difference in performance? (I appreciate that this is a big question...) Code Of Conduct: Affirmed. Overall Recommendation: 4
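One plausible reading of the Euclidean equivalence claim discussed above (App C) is that the angle of a point in the 2-D subspace spanned by a basis dimension and a fixed homogeneous coordinate is monotone in that coordinate, so angular thresholds reproduce classical axis-aligned splits. A minimal sketch under that assumption (illustrative only, not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))        # Euclidean points, 3 features

def angular_split(X, dim, theta):
    # Angle of each point in the 2-D subspace spanned by feature `dim`
    # and a constant homogeneous coordinate 1 (an illustrative choice).
    angles = np.arctan2(X[:, dim], 1.0)
    return angles > theta

# An angular threshold arctan(t) corresponds to the classical threshold t,
# because arctan is strictly increasing:
t = 0.7
left_angular = angular_split(X, dim=1, theta=np.arctan(t))
left_classic = X[:, 1] > t
assert np.array_equal(left_angular, left_classic)
```

The same angular parameterization generalizes to hyperspherical and hyperbolic components, where plain coordinate thresholds would not respect the geometry, which is the motivation the summary describes.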
Rebuttal 1: Rebuttal: We thank the reviewer for their careful attention, and are grateful for their favorable assessment of our “**well-written**, clear, convincing text”, our core idea being “**novel and interesting**,” and our method’s potential to be an “**effective, interpretable** tool.” **Relationship to signature selection and embedding generation.** We view signature selection and embedding generation as a complementary avenue of work and have tried to keep our discussion focused on classification and regression with existing embeddings. Nonetheless, as these are all parts of one process, we have tried to model simple but realistic end-to-end product manifold learning pipelines, starting e.g. from pairwise distances (for graph benchmarks) or feature tables. Many other papers emphasize the first half of the pipeline—signature selection and embedding generation—and deemphasize approaches to classification and regression. Our work helps complete the picture by expanding and improving ways to use these embeddings once they are generated. We believe this division of focus allows for directed progress in each area and drives the overall field of non-Euclidean machine learning forward. We are optimistic that advances in either of these subproblems can be combined, e.g. by using mixed-curvature decision trees to analyze state-of-the-art product embeddings. **Algorithm 1 in main body.** We agree that it would be better to include Algorithm 1 in the main body of the paper. If accepted, we will use the extra page in the camera-ready to include this. **Signature selection.** We selected signatures as follows: - For Gaussians on single manifolds, we try a range of curvatures; on product manifolds, we try signatures with curvatures in $\\{-1, 0, 1\\}$ - For graphs, we grid-search over signatures and use the one with the lowest embedding distortion - For VAEs, we follow \[1\] - For empirical datasets, the signature is determined by the specific problem. 
Selecting a signature is combinatorially difficult because the number of components, and the dimensionality of each component, must be selected. Common heuristics for searching over signatures include: * The curvature parameter can be learned smoothly in the range $(-\\infty, \\infty)$ by using stereographically-projected manifolds, as in mixed-curvature VAEs * Signatures can be built up component by component according to a greedy algorithm, as in \[4\] * Heuristic-guided Bayesian optimization \[3\] is a more principled and sophisticated approach to this * In general, embedding is reasonably fast; training VAEs is slower Regarding our claim that “product manifolds are not able to represent all patterns in data,” the authors of \[1\] make a *theoretical* claim about product manifold limitations, but provide no empirical examples. It is reasonable to speculate that complex graphs with heterogeneous curvature might fall into this category, in which case there is a pragmatic question to ask about fidelity-complexity tradeoffs in the choice of representation geometries. **Comparison to GCNs.** One way of using a graph’s adjacency matrix is to generate embeddings via metric learning, as in \[2\]; another is simply to use GCNs using some set of features. These are not mutually exclusive: in fact, generating metric embeddings is roughly comparable to pretraining a GCN on link prediction. A promising future direction of work could involve using our decision trees as lightweight, easily trainable, and modestly interpretable probes on representations learned by pretraining GCNs. Our benchmarks did not generally favor GCNs even in situations where a known graph adjacency matrix was introduced; however, we appreciate that GCNs are in principle capable of substantially more complicated workflows than the ones that we benchmarked.
We include the comment about tradeoffs to reflect the geometric deep learning community’s preference for GCNs on complex real-world tasks more than as a reflection of what we see in our own benchmarks: in particular, $\\kappa$-GCNs are never the best-performing method in our own Table 2. **References.** \[1\] Borde and Kratsios (2023). Neural Snowflakes: Universal Latent Graph Inference via Trainable Latent Geometries. \[2\] Gu et al (2019). Learning Mixed-Curvature Representations in Product Spaces. \[3\] Borde et al (2023). Neural Latent Geometry Search: Product Manifold Inference via Gromov-Hausdorff-Informed Bayesian Optimization. \[4\] Tabaghi et al (2022). Linear Classifiers in Product Space Forms. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I'll stick with my current score.
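To make the distortion-based signature grid search mentioned in the rebuttal above concrete, here is a minimal stdlib-only sketch; the `average_distortion` helper, the toy distance matrices, and the candidate signature names are invented for illustration and are not the authors' actual pipeline:

```python
import itertools

def average_distortion(d_true, d_emb):
    """Mean relative distortion |d_emb - d_true| / d_true over all node pairs."""
    pairs = list(itertools.combinations(range(len(d_true)), 2))
    return sum(abs(d_emb[i][j] - d_true[i][j]) / d_true[i][j]
               for i, j in pairs) / len(pairs)

# Toy graph distances vs. the distances realized under two candidate signatures.
d_graph = [[0, 1, 2],
           [1, 0, 1],
           [2, 1, 0]]
candidates = {
    "H2":    [[0, 1.1, 2.3], [1.1, 0, 1.2], [2.3, 1.2, 0]],
    "H1xS1": [[0, 1.0, 2.1], [1.0, 0, 1.0], [2.1, 1.0, 0]],
}
# Pick the signature whose embedding realizes the graph metric most faithfully.
best = min(candidates, key=lambda sig: average_distortion(d_graph, candidates[sig]))
```

Whichever candidate signature attains the lowest mean relative distortion would then be used for downstream classification or regression.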
Summary: This paper presents a novel extension of Decision Trees (DTs) and Random Forests (RFs) to product manifolds, which are Cartesian products of hyperbolic, hyperspherical, and Euclidean spaces. The authors introduce an angular reformulation of DTs that respects the geometry of product manifolds, resulting in geodesically convex, maximum-margin, and composable splits. The proposed method generalizes existing Euclidean and hyperbolic algorithms and introduces new hyperspherical DT algorithms. The researchers evaluate their approach on various tasks, including classification, regression, and link prediction, using synthetic data, graph embeddings, mixed-curvature variational autoencoder latent spaces, and empirical data. Their product RFs demonstrate strong performance, ranking first in 25 out of 57 benchmarks and placing in the top 2 for 46 out of 57 when compared to seven other classifiers. ## update after rebuttal Thank you to the authors for their responses. I believe it is essential to incorporate this discussion into the next revision of the paper. Claims And Evidence: Overall, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method is a tree-based approach for classification and regression tasks on the product space. While the rationale for choosing a tree-based method in product space is not well-motivated, the authors do compare its performance with other non-tree-based methods. Various types of data are considered in the experiments, but the embedding space for the data is relatively low-dimensional. Theoretical Claims: In Sec. 3, lines 191–192, the claim "we observe that homogenous hyperplanes are geodesically convex in any constant-curvature manifold" is neither proved nor cited. Moreover, the concept "geodesically convex" is not defined, and its importance is not clear. Experimental Designs Or Analyses: I may have overlooked some details.
I assume the numbers and charts indicate the mean scores and 95% confidence intervals as stated in lines 271-273. How many runs have the authors conducted for each experiment? I see in line 926 that "we also employ the same random seed for all RF models." Is that same set of seeds applied to all experiments to compute the mean accuracy and its confidence interval? Additionally, the results from Product RF and Ambient RF are not statistically significantly different in most cases. The performance of DTs and RFs is sensitive to the choice of hyperparameters. It is not clear how the hyperparameters are chosen (section E.2). And as long as the method for choosing hyperparameters for different RFs is the same, it is not necessary to "set all DT and RF hyperparameters identically" (line 912). Supplementary Material: I have reviewed the whole Supplementary Material. Relation To Broader Scientific Literature: The proposed algorithms seem to be straightforward extensions of some existing work, combining the classification method on the product space [Tabaghi et al. Linear classifiers in product space forms.] with the HyperDT framework [Chlenski et al. Fast hyperboloid decision tree algorithms.]. The authors' reformulation of the decision tree in Euclidean space (Section 3.1) seems unnecessary and introduces extra computations. Overall, the method appears somewhat incremental and does not offer significant novelty. Essential References Not Discussed: Please refer to any references I have mentioned in the answers to other questions. Note that some may have already been cited in the submission. Other Strengths And Weaknesses: Weakness: In this paper, the authors use product spaces as representation spaces, but in the real world it is hard to choose the optimal signature of the product space.
Moreover, the authors restrict their work to product spaces with small (1-, 2-, and 4-dimensional) component manifolds, and the dimension of the product space is also relatively small (< 10), which further limits the paper. Other Comments Or Suggestions: In Eq. 1, $x'$, as a tangent vector, should lie in the space $\mathbb{R}^d$. Questions For Authors: line 976, "Table 6: A summary of the neural models benchmarked in our work" — where are the results for the tangent/ambient MLP/GNN? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their attention to our manuscript, and in particular for praising the strong performance of our method, the thoroughness of our benchmarks, and the “clear and convincing evidence” of our claims. **Motivation for a tree-based method and our contribution** Tree-based methods are extremely popular for Euclidean machine learning and have recently become popular in hyperbolic spaces as well \[1,2\]. We believe tree-based methods can be promising as lightweight, simple, easy-to-train, and interpretable probes on top of learned neural representations, or as standalone classifiers/regressors in their own right. Additionally, we address a gap in the literature by providing a detailed survey of product space-valued datasets and predictors: - We describe and benchmark an end-to-end product space ML pipeline - We describe many variants of existing models (e.g. kappa-GCN regression) for the first time We also note \[3\] is a well-regarded paper that generalizes Euclidean/hyperbolic GCNs to all constant curvatures and product manifolds. **Theoretical justification for splits** We thank the reviewer for pointing out that our paper’s exposition on geodesic convexity (GC) and why it matters for classifiers is too short. We initially believed that it would suffice to defer to other papers describing linear classifiers in non-Euclidean spaces (e.g. \[3\], \[4\], and especially \[1\], which explicitly argues on behalf of GC). We agree a brief but explicit discussion of the geometry of linear splits and its relationship to GC is needed. If accepted, we will use our extra space to expand on this between Sections 2.1 and 2.2 as follows: - $S \\subset \\mathcal{M}$ is **geodesically convex** if any $p, q \\in S$ implies the geodesic $\\gamma\_{p,q} \\subseteq S$. We take this for granted in Euclidean linear classifiers, as any separating hyperplane preserves this property.
However, this is not automatically true in non-Euclidean spaces, which could be a source of misgeneralization for models \[1,4\]. - Building on \[5\] Ch. 3.1, if $\\mathcal{M}$ is partitioned by a GC set $G$ into $A$ and $B$, then $A$ and $B$ are also GC. By contradiction, if $\\gamma\_{u,v}$ (where $u, v \\in A$) were to cross into $B$, it must take a path $A \\to G \\to B \\to G \\to A$. This implies the existence of $p, q \\in G$ such that $\\gamma\_{p,q}$ enters $B$, implying $\\gamma\_{p,q} \\nsubseteq G$, violating the GC property of $G$. - \[6\] defines a linear split in product manifolds as $l\_\\mathbf{w}^\\mathcal{P} = \\operatorname{sign}(\\langle w\_\\mathbb{E}, x\_\\mathbb{E} \\rangle + \\alpha\_\\mathbb{S} \\sin^{-1}( \\langle w\_\\mathbb{S}, x\_\\mathbb{S} \\rangle ) + \\alpha\_\\mathbb{H} \\sinh^{-1}( \\langle w\_\\mathbb{H}, x\_\\mathbb{H} \\rangle\_\\mathcal{L} ) + b),$ where $w\_\\mathcal{M}$ denotes the restriction of $w \\in \\mathcal{P}$ to the submanifold $\\mathcal{M}$. - Under our angular reformulation, $w$ is sparse in all but two dimensions lying in the same submanifold, with no bias term; thus, the split simplifies to $\\langle w\_\\mathcal{M}, x\_\\mathcal{M} \\rangle = x\_0\\cos(\\theta) - x\_d\\sin(\\theta),$ where $\\theta$ is our splitting angle and $d$ is the dimension along which we split. By restricting our attention to two dimensions within a single component manifold at a time, our angular reformulation bypasses almost all the complexity of performing geodesic splits in product manifolds. **Clarifying experimental details** - We show means with 95% confidence intervals in our tables - We use 10 trials for each experiment - Seeds are shared across RFs for splitting, but each trial uses a different seed.
For instance, in Figure 3: - Seeds 0-9 are used for K=-4, 10-19 for K=-2, 20-29 for K=-1, etc. **Low dimensionality** We benchmark lower-dimensional embeddings, as these are more challenging and rely more heavily on the classifier’s ability to match the geometry of the space. However, we agree that it is worth evaluating higher-dimensional datasets; therefore, consistent with \[3\], we have benchmarked manifolds with 16 total dimensions at [https://postimg.cc/MvR5R8Pp](https://postimg.cc/MvR5R8Pp), which we will add to the Appendix. **Tangent vector space** - Thank you, we have fixed this. **Missing neural net benchmarks** Thank you for pointing this out, we have a full suite of benchmarks at [https://postimg.cc/gallery/ZbfzMzR](https://postimg.cc/gallery/ZbfzMzR). We will add the full benchmarks to the Appendix in the camera-ready. **References:** \[1\] Chlenski et al (2024). Fast Hyperboloid Decision Tree Algorithms. \[2\] Doorenbos et al (2024). Hyperbolic Random Forests. \[3\] Bachmann et al (2020). Constant Curvature Graph Convolutional Networks. \[4\] Cho et al (2018). Large-Margin Classification in Hyperbolic Space. \[5\] Udriste (1994). Convex Functions and Optimization Methods on Riemannian Manifolds. \[6\] Tabaghi et al (2022). Linear Classifiers in Product Space Forms. --- Rebuttal Comment 1.1: Comment: Thank you for answering some of my questions. I feel many of my questions have not been addressed in the rebuttal: 1. The comparable performance between ambient RF and product RF in most cases. 2. The reformulation of the decision tree in Euclidean space (Section 3.1) seems unnecessary and introduces extra computations. 3. The choice of hyperparameters in the implementation of the proposed product RF and other tree-based methods. 4. Choosing the optimal signature of the product space for real data. Additionally, I still feel that 16 is a relatively low dimension, though this can be acknowledged as a limitation of the proposed method.
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for their follow-up questions and the opportunity to address outstanding concerns. In our initial rebuttal, we prioritized theoretical aspects and our main contributions. Our intent was to use the limited rebuttal space effectively (in accordance with ICML guidelines, which state that authors "are not expected to respond to every individual point in the reviews.") We sincerely appreciate the opportunity to elaborate on the remaining points and present additional benchmarks to further clarify our methodology and results. We hope our additional responses resolve any remaining concerns, and would be grateful if the reviewer would consider revising their evaluation accordingly. **Q1: More detailed comparison to ambient RFs:** To address concerns about our performance relative to ambient random forests, lack of statistical significance, and low dimensionality, we ran an additional suite of benchmarks in which we: * Directly compared ambient RFs and product RFs, omitting other models * Increased the number of trials to 30 per signature * Benchmarked signatures totaling 4, 16, 64, 256, and 1024 dimensions. * Benchmarked signatures: single hyperboloid (H), single hypersphere (S), product of two hyperboloids (HH), product of two hyperspheres (SS), and a hyperboloid-hypersphere product (HS) * Included Wilcoxon p-values on mean accuracies. To adjust for dimensionality, we made two further modifications relative to the Gaussian mixture benchmarks in the paper: we divide the variance by total dimensionality $d$ to prevent norms from blowing up when sampling the Gaussian mixture, and we set `n_features=d` for the product space manifolds because testing all $\\binom{d}{2}$ combinations is infeasible at higher dimensionalities. Product RFs outperformed ambient RFs in 16/20 tests, with 8 being statistically significant ($p < .05$).
[Expanded synthetic benchmark table](https://postimg.cc/r0XnKkB5) **Q2: Extra computations for Euclidean submanifolds.** We agree that the computations described in the paper for Euclidean submanifolds are unnecessary. However, there are tradeoffs to removing them: our current split function (Eq. 16) assumes an angular reformulation (Eq. 15), necessitating Eqs. 17–19 as a consequence. The alternative is to remove Eqs. 17–19 and reintroduce the conventional split (Eq. 14). We have tried this in the past, but found that: * It did not improve runtime * Using two distinct split functions introduced extra overhead and compromised code elegance * The angular *perspective* (distinct from choice of implementation) is still needed for narrative reasons, to unify our preprocessing across all component manifold curvatures **Q3: Hyperparameter choice.** We initially selected simple hyperparameters to facilitate direct comparison between split geometries with minimal confounding. Specifically, we set maximum tree depth to $\\log\_2 32 = 5$ to match the 32 clusters in our Gaussian mixture experiments. We recognize, however, that the thorough hyperparameter investigation suggested in the initial review could strengthen our work. To further investigate the role of hyperparameters, we ran two more benchmarks: * Decoupled hyperparameter sweep (random search, 5-fold CV, H4 x E4 x S4) * Maximum depth sweep (grid search, same manifolds as Q1 benchmarks) We tested the same choice of 30 random hyperparameters for each model under 5-fold CV, recording the best CV score and test set accuracy for each method. Due to time and computing constraints, we elected to use one signature (H4 x E4 x S4), reasoning that a signature that was not evaluated previously in the paper would prevent biasing our evaluation in favor of any particular model.
Our search space was: * `n_estimators`: \[3, 6, 12, 24\] * `min_samples_split`: \[2, 4, 8, 16, 32\] * `min_samples_leaf`: \[1, 2, 4, 8, 16\] * `max_depth`: \[1, 3, 5, 7, 9, None\], * `max_features`: \["sqrt", "log2", None\] In general, we find that all methods respond to hyperparameters similarly, except for the single-manifold RF baseline, which underperformed in general; product space random forests were the best-performing overall. We speculated that Product RFs’ preference for higher values of `min_samples_leaf` and `min_samples_split` may suggest that it has an easier time partitioning the manifold. To corroborate this, we swept over `max_depth` values using a different set of manifolds, finding that Product RF performance saturates earlier than other models for some manifolds. [Decoupled hyperparameter sweep scores](https://postimg.cc/Y4mDr9sp) [Max depth sweep figure](https://postimg.cc/Vd24v1Q4) **Q4: Signature choice.** We refer you to our rebuttal to reviewer **3Gbp**: subheading “Signature selection” details how we selected signatures for our benchmarks; subheading “Relationship to signature selection and embedding generation" lays out our philosophy on signature selection in this paper.
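The shared random draw over the search space listed above might be sketched as follows (a stdlib-only illustration; `sample_configs` is a hypothetical helper, and model fitting plus 5-fold CV scoring are omitted):

```python
import random

# Search space as listed in the rebuttal above.
SEARCH_SPACE = {
    "n_estimators":      [3, 6, 12, 24],
    "min_samples_split": [2, 4, 8, 16, 32],
    "min_samples_leaf":  [1, 2, 4, 8, 16],
    "max_depth":         [1, 3, 5, 7, 9, None],
    "max_features":      ["sqrt", "log2", None],
}

def sample_configs(space, n_configs, seed=0):
    """Draw n_configs hyperparameter settings uniformly at random; a fixed
    seed lets the same draws be reused for every model being compared."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in space.items()} for _ in range(n_configs)]

configs = sample_configs(SEARCH_SPACE, 30)
```

Reusing the same 30 draws for every model, as described above, keeps the comparison between split geometries paired rather than confounded by different hyperparameter samples.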
Summary: The manuscript develops a methodology for creating decision trees and random forests (classifiers or regressors) by assuming the data coordinates can be decomposed into products of hyperbolic, hyperspherical, or Euclidean components. It is shown that each of those spaces belongs to a class of "constant curvature manifolds" and how, in each, the decision boundary can be written in terms of a threshold on an angle (where the different geometries of the manifolds contribute different angle equations). Then, the product decision tree (and product random forest) can be expressed in terms of the local decision trees. The manuscript demonstrates the method on a range of benchmarks, with extra effort required to establish how many real-world benchmarks can be expressed as sampled from a product manifold. Claims And Evidence: The claims on constructing decision trees in various constant curvature manifolds are clear and convincing, and the construction of a product decision tree from them looks reasonable. Methods And Evaluation Criteria: The manuscript's evaluation criteria are highly biased toward problems in which the input coordinates can be formulated as products of constant curvature. Indeed, the authors make an effort to demonstrate how to cast many problems into those terms. Still, this bias makes the evaluation tricky and prevents reviewers from asking, "How does it perform on [reviewers-favorite-benchmark]?". The results are good, but only marginally, i.e., comparable to competing methods. Theoretical Claims: Looks valid to me. Experimental Designs Or Analyses: No. Supplementary Material: No. Relation To Broader Scientific Literature: The background on DT and RF is well presented, and the main contribution is clearly explained. Essential References Not Discussed: Not that I could see.
Other Strengths And Weaknesses: Strengths: * Presenting a unified view of constant-curvature manifolds and a novel formulation for decision trees in Euclidean, Hyperbolic and Hyperspherical cases. * Formulation of product decision trees. * Ideas for how to apply the presented theoretical construct to real-world cases Weaknesses: * There is a lack of clarity regarding when the method is applicable and when we can expect results that are competitive with other methods. That is, while the method obviously works well on product spaces, it is not clear when those are useful approximations in real-world problems. Other Comments Or Suggestions: None. Questions For Authors: * Can you provide a list of types of problems where the method is applicable per your experience? This would improve the ability to apply your method and might steer future research into extending this list. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their favorable comments on our manuscript, including our “**clear and convincing**” claims, “**good results**,” and the “**extra effort** required to establish” performance on real-world benchmarks. **Selection of benchmarks.** Our work makes the admittedly strong assumption that we are working with product manifold-valued data, and our method relies on having product manifold-valued inputs in hand. Although we have tried to create as broad a set of benchmarks as possible, we acknowledge that this can be a limitation of the product manifold approach to machine learning as a whole. In our paper, we try to model what end-to-end approaches to machine learning on product manifolds might look like, including examples of how different types of datasets can be converted to product manifold coordinates. We hope that, in addition to supporting the value of mixed-curvature decision trees, our paper can serve as an instructive guide for researchers who are intrigued by product manifolds but aren't sure how these concepts apply to their own datasets. An additional advantage of product manifolds is that they recover single-manifold and Euclidean geometry as special cases: thus, all single-manifold problems can trivially be viewed through the product manifold lens. More generally, \[1\] describes a method based on sectional curvature to determine whether a graph warrants a product space representation; in such cases, we would expect: * Product-space representations to model the data substantially better (in terms of distortion, classification accuracy, etc.) than single-manifold representations, and * Our methods to be effective for classification/regression We always welcome suggestions for any further benchmarks that can be formulated in terms of product manifolds, particularly those from application areas not included in our original manuscript.
**Suggestions for applications.** Per the reviewer’s request, we also provide an incomplete list of some situations in which product manifold approaches can be helpful: * Many non-Euclidean machine learning problems involve multiple manifolds. For instance, \[2\] describes hyperbolic convolutional neural networks, in which each channel is hyperbolic. To combine representations in multiple channels, they propose a variant of vector concatenation that stays on the hyperboloid. An alternative way to view this would be that the channels form a product space of individual hyperbolic representations. * Similarly, pairwise (or higher-order) interactions between data points with non-Euclidean representations can be recast in terms of product manifolds. An example of this is our reformulation of link prediction as binary classification on a product manifold of (outbound node, inbound node) pairs. * To embed pairwise distances or other pairwise dissimilarities, one may use the coordinate learning method described in \[1\]. This is a popular approach to embedding graphs, where heterogeneous curvature can make single constant-curvature manifolds inadequate. Our graph benchmarks follow this approach. * To embed features without known pairwise distances, mixed-curvature VAEs can be useful, as in \[3\]. * We are particularly excited about biological applications of product manifolds, which have been a popular application area for hyperbolic deep learning \[4\], and where complicated latent structures can give rise to heterogeneous curvature. For instance, we follow \[5\] in using mixed-curvature VAEs to embed single-cell transcriptomics data. We speculate that in such datasets, differentiation trajectories may embed in hyperbolic space, whereas periodic signals (e.g. those pertaining to the cell cycle) embed in hyperspherical space. 
Empirically, \[5\] finds that single-cell data embeds better into product manifolds than single manifolds, and \[6\] embeds pathway graphs in product manifolds. **References.** \[1\] Gu et al (2019). Learning Mixed-Curvature Representations in Product Spaces. \[2\] Bdeir et al (2024). Fully Hyperbolic Convolutional Neural Networks for Computer Vision. \[3\] Skopek et al (2020). Mixed-curvature Variational Autoencoders. \[4\] Khan et al (2025). Hyperbolic Genome Embeddings. \[5\] Tabaghi et al (2022). Linear Classifiers in Product Space Forms. \[6\] McNeela et al (2023). Mixed-Curvature Representation Learning for Biological Pathway Graphs. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I will keep my favorable review and score.
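As a concrete illustration of the link-prediction reformulation mentioned in the rebuttal above (ordered node pairs become points on a product manifold of two node embeddings), here is a minimal sketch with invented toy values; `link_prediction_dataset` is a hypothetical helper, not the authors' code:

```python
import itertools

def link_prediction_dataset(adjacency, embeddings):
    """Recast link prediction as binary classification: each ordered node
    pair (u, v) becomes one point formed by concatenating the two node
    embeddings (a point on the product of the two embedding manifolds);
    the label is edge presence."""
    X, y = [], []
    n = len(adjacency)
    for u, v in itertools.permutations(range(n), 2):
        X.append(embeddings[u] + embeddings[v])  # (outbound, inbound) pair
        y.append(adjacency[u][v])
    return X, y

# Toy 3-node path graph with invented 2D node embeddings.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
emb = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
X, y = link_prediction_dataset(adj, emb)
```

Any binary classifier over the product space, such as the product DTs/RFs discussed in this thread, could then be trained on `(X, y)`.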
Summary: This paper proposes mixed-curvature decision trees (DTs) and random forests (RFs) for data embedded in product manifolds (combinations of hyperbolic, spherical, and Euclidean spaces). The core algorithm selects the geodesic split from three options (hyperbolic, spherical, Euclidean) with the highest information gain. Claims And Evidence: - Performance claims are seemingly contradicted by results in Table 1, where product RFs underperform compared to single-manifold RFs in some cases. - No proofs or formal guarantees are provided for the decision boundary derivations or optimization steps to support the theoretical foundation. - Motivation for product manifolds is not sufficiently justified; theoretically-solid alternatives like ensembles of single-manifold trees are not discussed. Methods And Evaluation Criteria: - Synthetic datasets are not described in detail. - F1/accuracy scores are omitted, which clearly reduces interpretability of the results. Theoretical Claims: 1. The decision boundary is unclear. 2. It is unclear how geodesic splits are computed for product manifolds (e.g., interactions between geometries?). 3. The tangent space definition is presented without context or relevance to the algorithm, and does not directly lead to a tangent RF. 4. Theoretical guarantees for convergence or optimality are absent. 5. No theoretical justification is given. 6. Product-manifold decision boundaries are not defined. The paper relies on an ad-hoc approach instead. Experimental Designs Or Analyses: - Product MLP/GCN baselines are confusing and not explained. - Tangent RFs are unclear in design. - Comparing to tangent RFs is not explained. - Competitors are not well explained. - Lack of motivation for manifold configurations (e.g., why specific curvatures). Supplementary Material: - Appendix B: Algorithm 1 describes the core split-selection process but omits curvature parameter handling.
- Appendix C: Synthetic results show unclear performance trends / mixed performance; no ablation on manifold combinations. Relation To Broader Scientific Literature: - Builds on mixed-curvature learning but does not cite __specific__ foundational works or sections, instead generally referring to a single source for its foundation without specific attribution. Essential References Not Discussed: ./. Other Strengths And Weaknesses: #### Strengths - Novel integration of mixed-curvature geometries into DTs/RFs. - Potential for applications in hierarchical or graph-structured data. #### Weaknesses - Poor attribution (e.g., Riemannian geometry basics cited via a single source instead of specific papers). - Ambiguous algorithm description (e.g., tangent RFs). - Writing lacks clarity, particularly in Sections 2.1–2.2. - The **benefits over an RF ensemble of single-manifold DTs** (hyperbolic + spherical + Euclidean) are _very unclear_. This RF ensemble approach is much cleaner and supported by reliable theory. Other Comments Or Suggestions: - Remove tangential details (e.g., PyTorch `arctan2` footnote). - Condense Sections 2.1–2.2. - Add some background on DTs. - Define a decision boundary and decision rule for product manifolds for DTs. Questions For Authors: - How are curvature parameters selected for synthetic/data-driven manifolds? - Why compare to tangent RFs? How do they differ from product RFs? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their time and for acknowledging our “novel integration of mixed-curvature geometries into DTs/RFs” and its “potential for applications in hierarchical or graph-structured data,” and for praising its effectiveness in single-manifold settings. **Added benchmarks** Per the reviewer’s suggestions, we have re-run our benchmarks: - We have added ensembles of single-manifold trees, which generally underperform our method - We have included F1-macro scores for classification - For transparency, we include all neural models. Accuracies, F1 scores, and MSEs can be found at [https://postimg.cc/gallery/ZbfzMzR](https://postimg.cc/gallery/ZbfzMzR). **Clarifying benchmark assumptions** The reviewer suggests that Table 1 contradicts our performance claims. Table 1 shows that our method does well *both* in problems involving single-curvature and product manifolds; these are different problems, not different methods; therefore, they cannot be contrasted in this way. In particular, our benchmarking setup assumes a manifold is already given. For instance, given a dataset with signature H2 x E2 x S2, our only choices are: - Coerce to E8 (Ambient models) - Coerce to E6 (Tangent models) - Coerce to one of {H2, E2, S2} per tree (single-manifold tree ensemble) - Our method Of the above, ours is *the only method* for training a decision tree *without* coercing to a different geometry. For details about our selection of signatures, please refer to our discussion with 3Gbp, subheading “Signature selection.” **Comparison to single-manifold decision trees** We justify our choice not to use single-manifold trees in our paper: > Letting a single DT span all components—as opposed to an ensemble of DTs, each operating in a single component—allows the model to independently allocate its splits across components according to their relevance to the task at hand Nonetheless, we agree it is better to verify. 
In our benchmarks, these ensembles tended to underperform other models, including ours. Furthermore, because designing an ensemble of single-manifold decision trees relies on the constant-curvature decision trees we describe in Sections 3.1–3.3, this method is still a novel contribution of this paper. **Theoretical justification** - For an elaboration on geodesic convexity and decision boundaries, please refer to our discussion with SX2z, subheading “Theoretical justification for splits.” In brief, we explain and commit to updating our manuscript with: - A definition of geodesic convexity - A proof sketch for why geodesic splits partition manifolds into geodesically convex subspaces - The equation for geodesic splits in product manifolds - Citations for each component manifold being split by homogeneous hyperplanes - An explicit connection between geodesic splits and our angular reformulation, clarifying that our splits are fully contained within one of the component manifolds, bypassing the complex interactions between component manifold geometries - A more explicit restatement of the relationship between this and the discussion of geodesic convexity in hyperbolic decision trees (HyperDT) - Optimization details: - We modify CART, a greedy algorithm without optimality guarantees \[1\]. The overall behavior of the algorithm is left intact. Moreover, constructing optimal DTs is known to be NP-hard \[2\]. - Tangent DTs/RFs: - We directly use the method used in \[3\]: apply the log-map at the origin and then train Euclidean DTs/RFs on that. We highlight Section 4.3 for a description. - Eqs. 
1 and 12 are necessary for extending the log map to product manifolds, enabling the use of tangent DTs/RFs **Clarifying writing** - We commit to rewriting the caption for Table 1 to make it clearer that the manifolds represent aspects of the *problem*, not of the *approach* used, to prevent future confusion - We will consolidate Section 2 for clarity; more specific editing suggestions are greatly appreciated. - Model details: detailed descriptions for Tangent DTs/RFs, as well as neural net benchmarks, can be found in Appendix E; we will expand Appendix E to be even more detailed. - We will add the citations from our reply to SX2z. **Algorithm 1 curvature handling** Alg. 1 intentionally omits curvature handling because only the sign of the curvature (i.e. manifold type) is needed to compute midpoint angles under our formulation. **Clarifying Appendix C** The reviewer claims “Synthetic results show unclear performance trends / mixed performance; no ablation on manifold combinations” in Appendix C. However, Appendix C is actually a proof of equivalence for the Euclidean case. We request the reviewer clarify this statement. **References:** \[1\] Hastie (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. \[2\] Hyafil and Rivest (1976). Constructing Optimal Binary Decision Trees is NP-Complete. \[3\] Chlenski et al (2024). Fast Hyperboloid Decision Tree Algorithms.
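As a rough illustration of the tangent-DT baseline described in the rebuttal above (apply the log map at the manifold origin, then train ordinary Euclidean trees on the result), the following is a minimal sketch assuming the hyperboloid model with curvature −1; the Euclidean lift and the toy labels are our own illustrative choices, not the paper's pipeline:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def hyperboloid_log0(X):
    """Log map at the origin of the hyperboloid model (curvature -1).

    X: (n, d+1) array of points satisfying -x0^2 + x1^2 + ... + xd^2 = -1
    with x0 > 0. Returns (n, d) tangent coordinates; the first ambient
    coordinate is dropped because tangent vectors at the origin have a
    zero "time" component.
    """
    x0 = X[:, 0]
    spatial = X[:, 1:]
    # Distance to the origin is arccosh(x0); rescale the spatial part
    # (whose Minkowski norm is sqrt(x0^2 - 1)) to that length.
    norm = np.sqrt(np.maximum(x0 ** 2 - 1.0, 1e-15))
    return (np.arccosh(np.maximum(x0, 1.0)) / norm)[:, None] * spatial

# Toy usage: lift Euclidean points onto the hyperboloid via the exp map,
# then fit a standard Euclidean decision tree on the tangent features.
rng = np.random.default_rng(0)
V = rng.normal(size=(100, 2))
y = (V[:, 0] > 0).astype(int)
r = np.linalg.norm(V, axis=1, keepdims=True)
X_hyp = np.hstack([np.cosh(r), np.sinh(r) * V / r])
clf = DecisionTreeClassifier(max_depth=3).fit(hyperboloid_log0(X_hyp), y)
```

Because the lift above is the exp map at the origin, `hyperboloid_log0` recovers the original Euclidean coordinates exactly, which is what makes the tangent-space trees a meaningful Euclidean coercion baseline.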
Hierarchical Graph Tokenization for Molecule-Language Alignment
Accept (poster)
Summary: Previous LGLMs usually focus on the node level of molecules, ignoring the structural information in molecules. To address this, the paper proposes a novel strategy called Hierarchical Graph Tokenization (HIGHT), which uses a hierarchical graph tokenizer to encode the hierarchy of atom, motif, and molecular levels of informative tokens to improve the molecular perception of LLMs. The paper first uses VQ-VAE to obtain embeddings for atoms and motifs. Then, it concatenates the motif embeddings after the atom embeddings to feed into the models. The paper also uses a new augmented instruction tuning dataset, HiPubChem, enriched with the hierarchical graph information, to further enhance the molecule-language alignment. The paper also proposes the first hallucination benchmark MotifHallu based on common functional groups. Extensive experiments show the effectiveness of HIGHT in reducing hallucination by 40%, and significant improvements in various molecule-language downstream tasks. Claims And Evidence: 1. The paper claims they are the first to incorporate the hierarchical graph information into LGLMs. To my best knowledge, this seems to be true. However, the paper needs to include a related work section to compare itself against other previous models. Some other papers also try to capture hierarchical information in a similar way, such as [1,2]. A related work section can help the authors further clarify their novelty. 2. The hallucination benchmark MotifHallu seems to be problematic. Given that the proposed methods already contain those functional groups in their input, it is relatively more straightforward for such a model to answer correctly. [1] Li, S., Liu, Z., Luo, Y., Wang, X., He, X., Kawaguchi, K., ... & Tian, Q. (2024). Towards 3d molecule-text interpretation in language models. ICLR 2024. [2] Liu, Z., Li, S., Luo, Y., Fei, H., Cao, Y., Kawaguchi, K., ... & Chua, T. S. (2023). 
Molca: Molecular graph-language modeling with cross-modal projector and uni-modal adapter. EMNLP 2023. Methods And Evaluation Criteria: The method to obtain node/motif embeddings is based on Hierarchical Molecular Graph Self-supervised Learning (HiMol) https://www.nature.com/articles/s42004-023-00825-5. While it looks fine to use VQ-VAE to obtain motif embeddings, it is unclear why the paper chose the VQ-VAE to obtain the node embedding instead of the regular SMILES/SELFIES string. It might be better to include an ablation study using SMILES/SELFIES strings for atom embedding to make a fair comparison for results in Table 2. Additionally, using VQ-VAE to obtain the embeddings seems to use more computational resources. The paper needs to report its GPU hours, parameter count, and other computation-related information to make a fair comparison. Otherwise, the proposed method might not be scalable. Theoretical Claims: N/A Experimental Designs Or Analyses: The paper seems to include relatively comprehensive baselines and downstream tasks to compare against. The paper also conducts ablation studies and zero/few-shot testing. However, some problems still exist. 1. In Table 1, the paper compares the InstructMol-? against HIGHT-?. However, based on the ablation in this table, it seems that the performance gain is mainly from the instruction tuning dataset instead of the actual hierarchical tokenization part, which decreases the novelty of the proposed model. GIMLET, a method the paper also used as a baseline in Table 3, should be added as a SMILES-based LGLM for the motif hallucination experiments to show the contribution of the hierarchical tokenization. 2. In Table 4, the paper claims that it fails to achieve SOTA because some of the previous methods used a large amount of pretraining data. However, the paper also used a large amount of instruction tuning data (MoleculeNet, CheBI-20, etc) in the second stage, which seems to be exactly the same as those methods. 
The paper needs to compare those two instruction tuning datasets. Otherwise, the previous methods seem to be better due to fewer parameters and a simpler architecture. Supplementary Material: Yes, the paper provides dataset details, additional experiment results, and additional experiment details. Relation To Broader Scientific Literature: The paper provides a new representation learning strategy for molecules. However, the paper fails to include a related work section, which may confuse readers. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: 1. The paper did not provide any code for reviewers to check. Other Comments Or Suggestions: N/A Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer VbFG, Thank you for your time and suggestions to our work. Please find our detailed responses to your questions below: > Related work section Thank you for acknowledging our contribution as the first to incorporate the hierarchical graph information. As for the referred works, we have revised Sec 2 as the related work section, and included the discussion of existing attempts: Existing works also try to enrich the molecule-language alignment with additional modalities, such as 2D[2] and 3D[1] information. In contrast, we focus on the intrinsic hierarchical information of the molecules, such as motifs. > It is relatively more straightforward given the function groups in the inputs We kindly refer Reviewer VbFG to Table 17 in the appendix, where we can find that, even with the functional groups in the inputs (i.e., HIGHT w/o HiPubChem), it remains challenging for LGLM to correctly recognize whether the functional groups exist in the input molecule. In addition, it is not straightforward that the hierarchical alignment across atom-functional group-molecule is beneficial to broader downstream tasks. > Why choose VQVAE to obtain node embedding We choose to use VQVAE because it is one of the simplest and standard architectures (e.g., Mole-Bert and other LGLMs as in [here](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/comparison_LGLMs.md)). We would like to kindly note that, when adding SMILES/SELFIES to LLMs, it does not directly output node embeddings. Nevertheless, in Table 16 and 17, we also consider adding SELFIES to LLMs, which slightly decreases the perception of functional groups, and increases the chemical reaction prediction performances. > Computational overhead We have revised our manuscript to include the computational overhead report, including training and inference latency, as well as tunable parameters, which can be found [here](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/computation.md). 
Although HIGHT requires longer training time and relatively more tunable parameters, the absolute values are not high. Moreover, during inference, as LLM latency consumes most of the computation, HIGHT can even reduce the inference latency by generating more concise answers. > Performance gain in Table 1 We kindly refer Reviewer VbFG to Table 17 in the appendix, where we can find that, merely using the HiPubChem for instruction tuning with InstructMol still suffers from severe hallucination. It means that both tokenizers and the instruction tuning in the alignment phase are necessary to reduce hallucination. In addition, in Table 16, we can find similar phenomena in downstream tasks of chemical reaction prediction, which further strengthens the necessity for the two components. > GIMLET in MotifHallu We have evaluated two state-of-the-art SMILES-based LGLMs, i.e., GIMLET and Galactica-6.7B in MotifHallu. The results are given [here](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/motif_hallu.md). Interestingly, those models demonstrate a high hallucination rate on negative classes. Therefore, we recommend reporting macro F1 scores (F1 scores averaged across classes). **HIGHT demonstrates significant improvements in terms of macro F1 scores over all baselines, up to 14**. > Comparison with the instruction data We provided a comparison between the instruction tuning data used in HIGHT and other LGLMs [here](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/comparison_LGLMs.md). It can be found that HIGHT uses significantly less information and data to achieve relatively good capabilities across multiple tasks. > code We have provided the AC with an anonymous link to our code, and kindly asked the AC to share it with the reviewers according to the ICML policy. --- Rebuttal Comment 1.1: Comment: Thank you for your comments. Unfortunately, I cannot see the updated version of the paper. However, given the updated anonymous link, I think the current clarification is enough. 
Therefore, I decided to raise my score to 3. --- Reply to Comment 1.1.1: Comment: Hi Reviewer VbFG, Thank you for acknowledging our clarification and for agreeing to change the score. Please feel assured that all the promised revisions and new discussion/results will be included in the updated version (which has already been done for our updated manuscript). Thank you again for your time and constructive comments, which helped a lot to improve our work! Best, Authors
Summary: This paper proposes HIGHT, a novel molecular graph tokenization and post-training framework for applying large language models to molecular graphs. The paper proposes a novel hierarchical tokenization method incorporating molecular motif information, and uses novel alignment pretraining strategy to train models before task-specific instruction tuning. Experiments and ablation studies show that this method achieves excellent performance in various molecular property prediction, molecular description and chemical reaction prediction benchmarks. ## update after rebuttal My concerns have been well addressed so I will keep my rate. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing experiment results. Methods And Evaluation Criteria: Yes, proposed methods and evaluation criteria make sense for molecular property prediction, molecular description, and chemical reaction prediction problem. Theoretical Claims: There are no proofs or theoretical claims in the paper. Experimental Designs Or Analyses: Yes, I have checked experiment settings, and experiment results and analyses in motif hallucination, molecular property prediction, molecular description and chemical reaction prediction experiments. All experiments are sound and valid to me. Supplementary Material: Yes, I have reviewed the part of instruction tuning datasets and prompts in Appendix A. Relation To Broader Scientific Literature: The novelty contributions and empirical findings are very useful and valuable to broader chemical and biological science domains, as this paper proposes an effective method to make better use of the power of large language models on molecule data, potentially providing excellent technical tools for scientists in these domains for their domain-specific applications. Essential References Not Discussed: There are no essential related works not discussed. Other Strengths And Weaknesses: No other comments on strengths and weaknesses. 
Other Comments Or Suggestions: No other comments or suggestions. Questions For Authors: While the proposed method makes valuable contributions in easing the application of large language models to molecules, one significant gap still exists --- the proposed method cannot encode 3D structure information of molecules into sequence representations, which may impact molecular prediction performance (e.g., results in Table 2 are not as good as state-of-the-art 3D graph neural network models). I would like to hear the authors' opinion about how the proposed method could be extended to encode 3D molecular structures. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer zQSo, Thank you for your time and insightful suggestions, as well as your acknowledgment of the value and convincingness of our work. Following your suggestions, we have revised our manuscript to include a discussion on future work regarding extending HIGHT to incorporate 3D information. - **Hierarchical 3D Tokenizer**: To incorporate the 3D hierarchical molecular information, we need to design a new tokenizer based on GNN backbones such as EGNN and SE(3) Transformers compatible with 3D information. For example, we can extend the hierarchical VQVAE in HIGHT to accommodate 3D properties of motifs, such as SE(3) equivariance of the motif representations. - **Scaling 3D Data for Alignment**: When extending to 3D information, one could curate molecule, protein, and RNA data. The motifs in proteins can then be certain amino acids. In RNA, the motifs can be certain recurrent structures. - **3D Tokenizer Training**: To train the tokenizer to fully capture the 3D molecular information, we can design several self-supervised learning objectives such as 3D spatial position recovery and masked atom/motif prediction. - **3D Alignment Tuning**: To facilitate the alignment of molecule and language, we need to extend the instruction tuning dataset to include descriptions of 3D molecular properties, such as 3D positions. In addition to incorporating the 3D hierarchical information, we also need to incorporate language captions of the 3D hierarchical structures, such as properties of the amino acids. In addition, one could also incorporate pairwise hierarchical graph information, such as binding affinities of molecules to proteins. Therefore, it is promising to align 3D graphs with languages for broader scientific tasks, for which HIGHT provides the foundation for incorporating 3D hierarchical graph information for better alignment of graphs and languages.
Summary: The paper introduces a new representation of graphs (specifically, molecules) for the purpose of tokenization for LLMs. The key aspect of the new representation is that it is not entirely node-based; rather, it captures features in the graph at both the node and the motif level. It is not clear to me whether there are actually more than two levels in the hierarchy, and how the different levels communicate. The authors demonstrate impressive performance on several benchmark datasets. However, setting up a model of this type involves several components, including the LLM itself and fine-tuning, so it is not entirely clear what fraction of the improvement comes from the hierarchical tokenization itself. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: n/a Experimental Designs Or Analyses: Attempted to, to the extent that the experimental process of a complex pipeline involving LLMs can be verified based on just a paper. Supplementary Material: Briefly Relation To Broader Scientific Literature: The authors survey the literature in some depth Essential References Not Discussed: There is a recent paper titled "Exploring Hierarchical Molecular Graph Representation in Multimodal LLMs" by Hu et al that addresses the same problem but does not seem to be cited. This might just be because the paper only appeared on ArXiV in November of last year. Nonetheless, I would very much like the authors to compare the two approaches. Other Strengths And Weaknesses: The hierarchical graph representation itself is not described in detail. Other Comments Or Suggestions: - The intro seems to have been a little hastily written and its English could be improved. - The caption of Fig 1 says "all metrics have been improved a bit". What does this mean?? - Rather than "high order molecular structures" I would say "higher order molecular structures". - In line 200, "concrete" should not be used as a verb Questions For Authors: 1. 
How does the hierarchical graph representation relate to BRICS exactly? 2. Does the representation have just two levels of hierarchy or more? 3. How do the different levels communicate with each other? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer brfq, Thank you for acknowledging our performance improvements and for your constructive suggestions. Please find our explanations to your questions below: > it is not entirely clear what fraction of the improvement comes from the hierarchical tokenization itself. We kindly refer Reviewer brfq to Figure 3c and Table 16 in the appendix, where we conduct comprehensive ablation studies to analyze the contributions of each component in HIGHT. The differences between HIGHT and InstructMol lie in the positional encoding, hierarchical instruction tuning, and the hierarchical tokenizer. Thus, we study the performances of InstructMol plus those components, where each can also be considered as a variant of HIGHT without hierarchical tokenization and some components. In experiments, one can find that, compared to HIGHT, merely incorporating positional encodings, using the hierarchical instruction tuning, or using a larger tokenizer in InstructMol cannot bring improvements and may even lead to decreased performance. It demonstrates that, **while each component in HIGHT is critical to the performance, the hierarchical tokenizer contributes most to the alignment performance**. In addition, in Figure 3a, we also experiment with different base LLMs such as Vicuna and Llama-2, where HIGHT demonstrates consistent improvements. > Discussion of Hu et al. 2024. First, we would like to kindly refer Reviewer brfq to the ICML reviewer instructions, which state that ``Authors cannot expect to discuss other papers that have only been made publicly available within four months of the submission deadline.''. Even though Hu et al. came out in November 2024, it is within four months of the ICML deadline. Nevertheless, we will revise our manuscript to include a discussion of Hu et al. 
Although both of our works incorporate the hierarchical information into LGLMs, - Hu et al. do not consider the influence of the hierarchical information on the molecule-language alignment, i.e., hallucinations. - They also do not consider incorporating the hierarchical information into the alignment tuning phase. - They evaluate the usefulness of the hierarchical graph information in a limited number of tasks. > Questions related to hierarchical graph representation Hierarchical graph representation refers to **aggregating low-level (i.e., node-level) information into meaningful higher-level abstractions, usually via subgraph aggregation.** For example, in a social network, the higher-level abstraction could be a small group or a community. Although it is not limited to a restricted number of abstraction layers, in molecules, the hierarchy of atom-motif-molecule is the usual schema of the abstraction[1]. Hence, **we simply follow the common practice in the community to obtain the hierarchical molecular representations, i.e., using BRICS to identify common functional groups[2,3]**. Nevertheless, one could also apply subgraph learning strategies to enable more flexible hierarchies. In fact, when going beyond small molecules to biomolecules such as proteins, more abstract subgraphs, such as the 3D folded architecture, or the combined 3D structure between small drugs and protein pockets, are also critical to the functionalities of proteins[4]. > Other minor suggestions We have revised our manuscript according to your suggestions. For the caption of Fig 1: HIGHT brings improvements to all tasks. Due to the heterogeneity of the evaluation metrics, we transform the numerical values a bit for better visualization. The new Figure 1b and the corresponding details are given [here](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/overview.md). **References** [1] The art and practice of structure-based drug design: A molecular modeling perspective. Medicinal Research Reviews 1996. 
[2] Motif-based graph self-supervised learning for molecular property prediction, NeurIPS'21. [3] Molecular representation learning via heterogeneous motif graph neural networks, ICML'22. [4] Independent SE (3)-Equivariant Models for End-to-End Rigid Protein Docking, ICLR'22.
Summary: This paper aims to address the issue of tokenization in existing LGLMs (large graph-language models) that neglect the essential hierarchical structures inherent in molecules. The hierarchical structures are reflected as motifs or functional groups that are subgraphs within the larger molecular graph. The proposed solution is to introduce "motif tokens" as input to LLMs, in addition to the atom tokens and text tokens as usual. The "motif tokens" are obtained by an existing VQVAE approach (Xia et al., 2023). The evaluation of the proposed method is conducted across various downstream tasks, including property classification/regression, molecular captioning, and a custom-designed task focused on answering yes/no questions regarding the presence of functional groups. Claims And Evidence: The paper claims that Hierarchical Graph Tokenization could improve LGLM understanding of molecular structures, thereby benefiting various downstream tasks. However, in molecular property prediction (classification), the proposed method performs significantly worse than other baselines (though slightly better than LLaMA-2 in some cases). A similar lack of supporting evidence is observed in chemical reaction prediction tasks, where the proposed method fails to demonstrate substantial advantages. This weakens the argument that introducing motif tokens inherently improves molecular understanding across tasks. As shown in Figure 1b, the proposed method significantly outperforms baselines only on MotifHallu, the benchmark dataset designed specifically for motif-related tasks. However, the proposed method has been specifically designed to enhance functional group-related understanding by introducing "motif tokens" as input to LLMs. By explicitly incorporating these motif tokens, the input itself becomes aware of functional groups, making it understandable that the model performs more accurately on motif-specific questions. 
But it cannot address the key question: how the inclusion of motif tokens contributes meaningfully to other downstream tasks beyond functional group recognition. Methods And Evaluation Criteria: The concept is presented as “Hierarchical Graph Tokenization”, but in practice, its implementation is limited to adding motif embeddings as tokens alongside atom tokens in the input to LLMs. Notably, motifs are pre-identified using the Breaking of Retrosynthetically Interesting Chemical Substructures (BRICS) algorithm, rather than being learned within the model itself. Furthermore, atoms and motif tokens are obtained separately using distinct VQVAEs, meaning that hierarchy is not explicitly reflected in the tokenization process. Additionally, the tokenization of both atoms and motifs relies on an existing VQVAE approach (Xia et al., 2023), making the primary novelty of this work the incorporation of “k” motif embeddings via VQVAE, rather than a fundamentally new tokenization strategy. In the evaluation on MotifHallu, the metrics were not clearly defined. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: The experimental evaluation was conducted on various tasks, which are sufficient. Supplementary Material: I go through them, and have no questions. Relation To Broader Scientific Literature: If the claim could be supported, the contribution of this work can be a valuable addition to the field of LLM models for chemical tasks. Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: In “Given the imbalance of positive and negative samples, we separately report the F1 scores for different classes.” How are the F1 scores for the positive and negative classes computed? In a binary classification problem, the F1 score is typically reported as a single value, considering both classes. 
“Given the imbalance of positive and negative samples”, what is the ratio of positive vs negative? If it is super imbalanced, why the Acc is still reported? And Acc does not given a misleading high value? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer eiUy, Thank you for your time and insightful suggestions for our paper. Please find our responses to your concerns below. > In molecular property prediction and chemical reaction tasks, HIGHT does not demonstrate substantial advantages. We need to clarify that the performance gaps between HIGHT and the baselines stem from differences in the base models. For example, Uni-Mol is built on a T5 architecture pretrained on large-scale 3D molecular data. GIMLET is built on a T5 pretrained on molecular property prediction-focused data. We present a [detailed table](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/comparison_LGLMs.md) listing the information and pretrained data used by HIGHT and baselines. It can be found that some "baselines" are pretrained on significantly larger datasets with additional information, **which makes them unfair baselines for comparison**. We report their performances in order to have an overview of the overall progress. Therefore, **our direct baseline is InstructMol, which uses the same architecture and pretraining data**. Compared to InstructMol, HIGHT demonstrates significant improvements. > how does the inclusion of motif tokens contribute meaningfully to other downstream tasks beyond functional group recognition? The visualization in Figure 1b may introduce perceptual biases about the improvements, owing to the different scales of the evaluation metrics involved. Due to the heterogeneity of the evaluation metrics, we transform the numerical values a bit for better visualization. The new Figure 1b and the corresponding details are given [here](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/overview.md). Comparing InstructMol and HIGHT, indeed, **incorporating motif-related information improves other downstream tasks**. 
> Limitations of tokenization by BRICS; Hierarchy in the tokenization process; Primary novelty of the work We need to clarify that the **primary novelty of this work lies in the introduction of the hierarchical tokenization**. It is well known that motif information is critical to molecular understanding. Without it, LGLMs exhibit high hallucination and perform subpar on molecule-language alignment. We acknowledge the limitations of BRICS in tokenization, but it is sufficient to demonstrate the usefulness of hierarchical graph information. **Despite the simplicity of our methodology, it brings significant improvements compared to LGLMs without hierarchical graph information**. Meanwhile, our method possesses a high degree of extensibility and can be augmented by incorporating advanced motif extraction techniques (such as [1,2]), or learnable motif extraction, which could further boost the performance of HIGHT. In addition, we would also kindly note that a key technical contribution in HIGHT lies not only in the tokenization, but also in the **alignment tuning process via HiPubChem**. We feel the original title may introduce confusion, thus we would like to change our title to ``Hierarchical Molecule-Language Alignment'' in order to highlight the key focus of our work, i.e., the importance of hierarchical information for molecule-language alignment. > Metrics in MotifHallu We have supplemented the details in our manuscript: - To calculate the F1 scores for positive and negative classes, we first divide the MotifHallu dataset according to the class label. For the positive F1 score, the calculation is standard. For the negative F1 score, we reverse the labels and predictions. - MotifHallu is quite imbalanced. There are 4,124 positive and 19,800 negative samples. - As Acc will be biased due to the imbalance, we updated the table to report the classwise averaged F1 score, i.e., macro F1. 
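The per-class F1 computation described in the bullets above can be reproduced with scikit-learn; the toy labels below are hypothetical and only illustrate the imbalance handling:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical imbalanced yes/no answers (1 = functional group present).
y_true = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 0, 0, 0, 0, 0, 0, 0, 1, 0])

f1_pos = f1_score(y_true, y_pred, pos_label=1)  # F1 of the positive class
# "Reversing labels and predictions" is the same as scoring the negative class:
f1_neg = f1_score(1 - y_true, 1 - y_pred, pos_label=1)
assert np.isclose(f1_neg, f1_score(y_true, y_pred, pos_label=0))
# Macro F1 averages the two classes, so it is robust to the class imbalance:
f1_macro = f1_score(y_true, y_pred, average="macro")
assert np.isclose(f1_macro, (f1_pos + f1_neg) / 2)
```

With heavy imbalance, plain accuracy would be dominated by the majority (negative) class, while macro F1 weighs both classes equally.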
To obtain an overview of the results, we present a detailed table [here](https://anonymous.4open.science/r/HIGHT-rebuttal-82B1/motif_hallu.md) with various metrics. It can be found that, HIGHT still obtains significant improvements up to 14 in terms of macro F1. **References** [1] Motif-based graph self-supervised learning for molecular property prediction, NeurIPS'21. [2] Molecular representation learning via heterogeneous motif graph neural networks, ICML'22.
null
null
null
null
null
null
MixMin: Finding Data Mixtures via Convex Minimization
Accept (poster)
Summary: This submission addresses the optimization of data source mixtures, formulating it as a bi-level optimization problem. The key result is Theorem 3.1, which states that under certain conditions (cross-entropy or mean squared error loss, hypothesis class contains Bayes optimal models), the optimal mixture weights are the same as those for optimally mixing the outputs of Bayes optimal models for each data source. The authors then propose the MixMin method as a practical realization of the key result, where less expensive proxy models are substituted for Bayes optimal models. Experiments are conducted on language model pre-training for better downstream task performance and on transfer learning for chemical assay prediction. MixMin's data mixtures achieve larger gains than the baselines considered in these experiments. ### Update after rebuttal Thanks to the authors for their follow-up reply. The verbatim changes to the theorems and proofs are good and I approve them. I am still uneasy about reporting confidence intervals or performing statistical tests on only 3 experiment trials. I recommend that the authors take the time before the camera-ready submission to increase the number of trials to at least 5, for all experiments, and perform Welch's t-tests as in the rebuttal and/or recompute confidence intervals. Overall, I am placing not insignificant trust in the authors to carry out the revisions (including the clarifications on experiments and bi-level optimization mentioned previously) and hope that they will do so for the camera-ready version. Claims And Evidence: On the theoretical side, I have issues with the proof of Lemma 3.2 and the applicability of the results to generative models. On the experimental side, I think that 3 train-test splits is too few and have several questions about the experimental design. Please see the respective sections below. 
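The reduction summarized above — choose mixture weights by optimally mixing the outputs of per-source proxy models under a convex loss — can be sketched as follows; the `proxy_probs` setup and the softmax parameterization are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
from scipy.optimize import minimize

def mixmin_weights(proxy_probs):
    """proxy_probs[k, i]: proxy model k's probability of the correct label
    on target example i. Returns simplex weights w minimizing the mixture's
    mean log loss -mean_i log(sum_k w_k * p[k, i]), which is convex in w."""
    K = proxy_probs.shape[0]

    def loss(theta):
        w = np.exp(theta - theta.max())
        w /= w.sum()  # softmax keeps w on the probability simplex
        return -np.mean(np.log(w @ proxy_probs + 1e-12))

    theta = minimize(loss, np.zeros(K), method="L-BFGS-B").x
    w = np.exp(theta - theta.max())
    return w / w.sum()

# Two data sources: proxy 0 predicts the target well, proxy 1 poorly,
# so the optimal mixture should concentrate on source 0.
probs = np.array([[0.9] * 50, [0.1] * 50])
w = mixmin_weights(probs)
```

The softmax reparameterization is only a convenience for unconstrained optimization; since the objective is convex over the simplex, any standard constrained solver would find the same mixture.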
Methods And Evaluation Criteria: The proposed MixMin method makes sense for data mixture optimization thanks to the connection given by Theorem 3.1. The two experimental evaluations, data source optimization for language model pre-training and transfer learning for chemical prediction, are also sensible. Theoretical Claims: I checked the proofs of Theorem 3.1 and Lemma 3.2. I have two major issues and three minor ones. Major issues: 1. For the proof of Lemma 3.2, it is not clear how the first displayed inequality is obtained, which is key for the remainder of the proof. More steps should be shown, and perhaps the authors have to be more precise about which norm is used for functions $f$. 1. Although this is not a formal theoretical result, I do not understand the claim in lines 157-159 that "distributions for generative tasks are only over 'labels' $p(y)$." This statement should be explained as it supports the application of MixMin to language modeling. Minor issues: 1. "No input distribution shift/covariate shift" should be formally defined in Section 3.1 as it is not clear without reading the proof of Theorem 3.1. 1. Footnote 2: I do not see how the claim that "functions too far away will not have low error" follows from the near-Bayes optimality assumed in Lemma 3.2. 1. Lines 572-573: What does "left hand sign of the inequality" mean? Experimental Designs Or Analyses: I reviewed all of the experiments section (Section 6) and think the following issues are important: 1. I think 3 train-test splits is too few. It does not seem meaningful to show 95% confidence intervals from three splits. How are these computed? 1. For the language model multiple-choice tasks, why choose the loss to be the (negative) log probability of the question and correct answer (Q + A) sequence, as opposed to the probability of A given Q? It seems that evaluating the model on also generating the question Q makes the task harder than necessary. 1. 
I do not understand the claim that the RegMix baseline has higher computational cost. As I understand it, for RegMix, the total number of tokens for training proxy models is (3.2B / 700 tokens/model) * (7X models) = 3.2B * (X / 100), which is the same as the (3.2B X / 700 tokens/model) * (7 models) for MixMin and random search. 1. For the chemical assay datasets, I do not understand the choices of training data that were made. 1. What is meant by "baseline models"? Why are they trained on the train + validation splits, while MixMin's proxy models are trained only on the train splits? What prevents MixMin's proxy models from using train + validation also? 1. Given the MixMin mixture, does the training data for the final model include the train split (or train + validation) for the target assay as well? I have the same question as above (train only vs. train + validation) for all assays included in this final training. 1. For the chemical assay experiment, why is the natural distribution the only baseline considered? Why not include the same baselines from before: balanced distribution, random search, RegMix? Supplementary Material: I reviewed the proofs in Appendix A and looked at some of the extra tables and figures linked from the main paper. Relation To Broader Scientific Literature: I question the claim that bi-level optimization "generally only admits expensive zero-order approaches" (lines 19-21). A quick search yields survey/tutorial papers such as [Zhang et al. (2023)](https://arxiv.org/abs/2308.00788) that discuss gradient-based methods, based on implicit gradients or unrolling of the lower-level optimization. I think it is a significant omission to not at least discuss such methods. 
Furthermore, while gradient-based methods may make assumptions that do not hold for the data mixing problem here, it may still be possible to run the algorithms and compare results (particularly on the chemical assay datasets that I presume involve less computation than language modeling). Doing so would better contextualize MixMin's performance in terms of mixture quality and/or computational cost. Essential References Not Discussed: Gradient-based methods for bi-level optimization (please see the previous comment). Other Strengths And Weaknesses: I think the key result of Theorem 3.1 is also the main strength of the work, showing that under some conditions, it may be possible to avoid the bi-level optimization inherent to data mixture optimization and instead optimize the mixture of outputs of source-specific models. Other Comments Or Suggestions: Minor comments and questions: - Line 84, right column: Should "a set of finite source distributions" be "a finite set of source distributions," i.e., "finite" applies to the set, not the distributions? - Lines 85-86, right column: Related to the previous comment, it is unclear at this point what the notation $dp$, $dt$ means. - Data Filtering is not Data Mixing: Is data filtering (at the data source level) not a special case of data mixing where some mixture weights are set to zero and the remaining non-zero weights are equal? Is the distinction being drawn here more about data sample level vs. source level? - Lines 171-172, right column: What is meant by "the excess error of MixMin should be analyzed beyond the risk of the proxy models $\hat{f}_p$? - Lines 325-326, left column: Does "predictive loss" mean cross-entropy with respect to the answer choices? This could be clarified. - Lines 335-336, right column: Does "trained models over the first 100,000 molecules in PCBA" mean the first 100,000 molecules in each assay? - I would appreciate a little more information about the chemical assay datasets. 
What is the data type of the target variable? Is the data tabular or otherwise structured? - Figures 6 and 7: I suggest labelling the x-axis as "target assay" for clarity. - Lines 432-433, left column: Should Table 2 be Table 1? Questions For Authors: Please see: - Points 2-5 under Experimental Designs or Analyses - Major issues 1, 2 under Theoretical Claims (although the latter may require more involved responses) Also from "Relation to Broader Scientific Literature": - To what extent are gradient-based methods for bi-level optimization applicable? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We discuss the main questions below, and will incorporate the other suggestions into our revised draft. > For the proof of Lemma 3.2, it is not clear how the first displayed inequality is obtained… We describe the derivation below, and will add this to our revised draft. Let $f_H$ and $f_{H^*}$ be the minimizer of the mixture $w$ in $H$ and $H^*$ respectively. By definition of $H^*$, $f_{H^*}$ is Bayes optimal, and by the assumption in the lemma statement, $||f_H - f_{H^*}|| \leq \epsilon$. Now note $|DM_{H}(w) - DM_{H^*}(w)| = |Loss(f_H) - Loss(f_{H^*})| \leq C ||f_H - f_{H^*}||$ by the Lipschitz assumption. Combining the two inequalities we have $|DM_{H}(w) - DM_{H^*}(w)| \leq C \epsilon$, which gives the desired inequality. > I do not understand the claim in lines 157-159… We mean MixMin holds for the unconditional CE loss, $\int -log(f(x)) dp(x)$, which matches $f(x)$ to $dp(x)$. MixMin also holds for conditional CE and MSE losses under no covariate shift: $\int -log(f_{y}(x)) dp(x,y)$ and $\int ||f(x) - y||_2^2 dp(x,y)$, with $dp_i(x) = dp_j(x)$ for all sources. We will add these precise statements to our revised draft. > I think 3 train-test splits is too few… We evaluated MixMin over many downstream tasks and sources (with 3 trials for each setting). Over these hundreds of experiments we saw MixMin consistently improved over the baselines. As a sanity check, we also increased the number of trials for our PIQA experiments with 160M models to 5 trials, and found the results were consistent. See the figures here: https://anonymous.4open.science/r/MixMin_LLM_extra_trials-54C6 > Why choose the loss to be the (negative) log probability of the question and correct answer… The A|Q task is a conditional CE task that can have covariate shifts (different probability of Qs among sources): MixMin does not apply. Hence we chose to learn A+Q, which is an unconditional CE task, so MixMin applies.
See our response to Reviewer KxeH for more discussion on the issue of non-convexity for covariate shift data mixing. > I do not understand the claim that the RegMix baseline has higher computational cost… The difference is the evaluation cost. RegMix requires evaluating 7X models (where X can be 10 in our experiments) but MixMin requires evaluating just 7. > What is meant by "baseline models"? Why are they trained on the train + validation splits, while MixMin's proxy models are trained only on the train splits?… We meant models (XGBoost) trained solely on the downstream task data, using the train + validation splits for that task. MixMin models are trained on the source datasets. We left the validation sets untouched to allow for unforeseen future experiments that require a validation set (e.g., additional hyperparameter tuning on the MixMin mixture). In this paper the default XGBoost hyperparameters on the MixMin mixture already worked better than the baseline model (with hyperparameters selected from cross validation), so they were unused. > Does the training data for the final model include the train split (or train + validation) for the target assay as well? No, it does not include the target assay. The train split of the target was only used to tune the MixMin weights. > For the chemical assay experiment, why is the natural distribution the only baseline considered?... Our language modeling experiments already showed the other baselines often performed worse than natural. We have now tested RegMix and Random Search on the chemistry experiments. We observed the same trends as natural; the baselines become worse while MixMin becomes better with increasing sources (only tested 3 targets for 1280 sources due to compute). The figures are here: https://anonymous.4open.science/r/MixMin_Chem_More_Baselines-286C/ > To what extent are gradient-based methods for bi-level optimization applicable?
The methods from this literature were not applicable to our problem, so we did not discuss them. To clarify this we plan to include the following paragraph in our revised draft: “Our setting for Bi-Level optimization presents several challenges for previous approaches, including those used for other hyperparameter optimizations. Firstly, the inner-optimization is over an arbitrary function space (e.g., non-parametric models like XGBoost), and so lacks the parametric gradients needed for many methods [1,2]. Alternative constraint based approaches through KKT suffer similar issues given the non-parametric space of the inner optimization [3]. For many model classes, one could parameterize the inner optimization, but we then lose the necessary convexity for these methods (e.g., Neural Networks).” [1] K. Ji et al “Bilevel optimization: Convergence analysis and enhanced design” [2] F. Pedregosa, “Hyperparameter optimization with approximate gradient” [3] C. Shi et al “An extended kuhn–tucker approach for linear bilevel programming” --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their responses. Below are my follow-up comments: 1. **Proof of Lemma 3.2:** I think I understand now the justification for the first inequality. However, my review of this was hindered by the different notations for $DM$: $DM(\lambda, \mathcal{H})$ in the (Data Mixing) equation, $DM_{\mathcal{H}}(\lambda, dt)$ in Lemma 3.2, $DM_{\mathcal{H}}(w)$ in the rebuttal. The notation should be made consistent, in addition to writing down the additional steps in the derivation. 1. **Claim in lines 157-159:** I think now that there are three cases: MSE, "conditional" CE involving two variables $x, y$, and "unconditional" CE involving only $x$ or only $y$. These should be distinguished and precisely defined (with equations). 
Then I would like to see the statement and proof of Theorem 3.1 be refined to reflect these three cases, where the "no covariate shift" assumption should also be clarified and is needed only for the MSE and conditional CE cases. This will also better justify the use of the joint Q + A loss for the language model experiment rather than the conditional A | Q. 1. **Train-test splits and confidence intervals:** I do not understand why the confidence intervals in e.g. Figure 3 in the paper, with 3 trials, appear no larger and in some cases clearly smaller than the confidence intervals in the corresponding rebuttal figure with 5 trials (for example, the blue MixMin bar in Figure 3). Thus, I still question whether confidence intervals obtained from only 3 trials are meaningful. The rebuttal addresses my other questions about the experiments and gradient-based methods for bi-level optimization, provided that the clarifications are added to the paper. Overall, I am maintaining my score, to be interpreted as "minor revision", because I would personally like to see a revised manuscript that addresses remaining issues 1-3 above before recommending acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your response! We provide below the verbatim updated text as requested, though are not allowed by ICML policy to share a revised manuscript. > I think I understand now the justification for the first inequality. However, my review of this was hindered by the different notations for DM… Sorry for the notation change, we were character limited in our original response. Below is the derivation with consistent notation, which will be included in the proof. Let $f_{H,\lambda}$ and $f_{H^*,\lambda}$ be the minimizer of the mixture $\lambda$ in $H$ and $H^*$ respectively. By definition of $H^*$, $f_{H^*,\lambda}$ is Bayes optimal, and by the assumption in the lemma statement, $||f_{H,\lambda} - f_{H^*,\lambda}|| \leq \epsilon$. 
Now note $|DM_{H}(\lambda,dt) - DM_{H^*}(\lambda,dt)| = | \int \mathcal{L}(f_{H,\lambda}(x),y) dt(x,y)- \int \mathcal{L}(f_{H^*,\lambda}(x),y) dt(x,y)| \leq C ||f_{H,\lambda} - f_{H^*,\lambda}||$ by the Lipschitz assumption. Combining the two inequalities we have $|DM_{H}(\lambda,dt) - DM_{H^*}(\lambda,dt)| \leq C \epsilon$ which gives the desired inequality. > I think now that there are three cases … Then I would like to see the statement and proof of Theorem 3.1 be refined to reflect these three cases, where the "no covariate shift" assumption should also be clarified and is needed only for the MSE and conditional CE cases. Below are the refined theorem statements and proofs. Note the proof is mostly the same except for an additional line clarifying the unconditional CE case. **Theorem 3.1**: Let the loss function for Data Mixing be unconditional CE, $\int_{\mathcal{X} \times \mathcal{Y}} \mathcal{L}\left(f(x),y\right) dp(x,y) = \int_{\mathcal{X}} -log(f(x)) dp(x)$, or conditional CE or MSE with no covariate shift, $\int_{\mathcal{X} \times \mathcal{Y}} -log(f^{y}(x)) dp(x,y)$ or $\int_{\mathcal{X} \times \mathcal{Y}} ||f(x) -y||_2^2 dp(x,y)$ with $dp(x) = dp'(x)~\forall p,p' \in P$. Suppose also $\mathcal{H}$ contains the Bayes optimal model for each mixture of the source distributions $dp \in P$. Then $\lambda^* = \arg \min_{\lambda \in \Delta^P} DM(\lambda, \mathcal{H})$ iff $\lambda^* = \arg\min_{\lambda \in \Delta^{P}} \int_{\mathcal{X} \times \mathcal{Y}} \mathcal{L}\left(\sum \lambda_p f_p(x),y\right) dt(x,y)$. **Proof**: Recall that the Bayes optimal model for conditional cross entropy is $p(y|x)$ and for conditional MSE is $\mathbb{E}_{y \sim p(y|x)} y$. 
In both cases, letting $f_{\lambda}$ be the Bayes optimal for the mixture $dp_{\lambda} = \sum_{p\in P} \lambda_{p} dp$, and $f_p$ be the Bayes optimal for the individual sources, we have for cross-entropy and $\ell_2^2$: $f_{\lambda} = \frac{\sum_{dp \in P} \lambda_p f_p(x) p(x)}{\sum_{dp' \in P} \lambda_{p'} p'(x)}$ and in particular, when there is no covariate shift amongst the sources we have $f_{\lambda} = \sum_{dp \in P} \lambda_p f_p(x)$. Note that the Bayes optimal for unconditional CE is just $p(x)$, and so similarly $f_{\lambda}(x) = \sum_{dp \in P} \lambda_p f_p(x)$. With this formula for $f_{\lambda}$, our Data Mixing objective then reduces to just learning a linear model over $f_p(x)$. Specifically, plugging $f_{\lambda} = \sum_{dp \in P} \lambda_p f_p(x)$ into Data Mixing gives Equation 1. > I do not understand why the confidence intervals in e.g. Figure 3 in the paper, with 3 trials, appear no larger and in some cases clearly smaller than the confidence intervals in the corresponding rebuttal figure with 5 trials (for example, the blue MixMin bar in Figure 3). Thus, I still question whether confidence intervals obtained from only 3 trials are meaningful. We have now conducted a Welch's t-test between MixMin and the baselines for our 3 trial setup (on the generative loss results for Pythia-160M on PIQA using 1% to train proxy models) and found the MixMin improvements were statistically significant. All cases had a p value under $0.05$. 1) MixMin vs. Random Search: $p = 0.006$ 2) MixMin vs. RegMix: $p = 0.007$ 3) MixMin vs. Balanced: $p = 0.008$ 4) MixMin vs. Natural: $p = 0.025$ In the 5 trial setup, we see the p values are also reduced as expected (unlike the confidence intervals, which seemed mostly the same). The 5 trial results are: 1) MixMin vs. Random Search: $p = 0.0003$ 2) MixMin vs. RegMix: $p = 0.00007$ 3) MixMin vs. Balanced: $p = 0.0004$ 4) MixMin vs.
Natural: $p = 0.0142$ Note these 5 trial hypothesis tests reuse data from the 3 trial test, so we need to be cautious when making claims with both. We will include tables with the p values for all our experiments (using 3 trials) in the Appendix in our revised draft.
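The Welch's t-test used above can be reproduced from the per-trial losses; a stdlib-only sketch of the t-statistic and Welch-Satterthwaite degrees of freedom (the loss values below are hypothetical placeholders, and obtaining the p-value additionally requires the t-distribution CDF, e.g. `scipy.stats.ttest_ind(..., equal_var=False)`):

```python
import math
from statistics import mean, variance

# Stdlib sketch of Welch's t-statistic and Welch-Satterthwaite degrees of
# freedom, as used to compare MixMin against a baseline across trials.
# The per-trial losses below are hypothetical, not the paper's numbers.
def welch_t(a, b):
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)        # sample variances (n-1 denominator)
    se2 = va / na + vb / nb                  # squared standard error of mean diff
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

mixmin = [2.91, 2.93, 2.90]      # hypothetical per-trial eval losses
baseline = [3.02, 3.05, 3.01]
t, df = welch_t(mixmin, baseline)  # t < 0 means MixMin's mean loss is lower
```

Unlike Student's t-test, Welch's variant does not assume equal variances across the two groups, which is appropriate when comparing different data-mixing methods.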
Summary: This paper proposes MixMin, a simple but effective method for solving the data mixture coefficients in large language model pretraining. The authors identify that the bi-level optimization objective for solving data mixture is intractable. But luckily, such an objective tends to be convex when model classes become larger. Thus, such an objective can be effectively solved using a gradient-based approach. Some theoretical justifications are provided along with experimental evaluations for LLM pretraining experiments to justify the effectiveness of the proposed MixMin method. Claims And Evidence: The claim of MixMin is that it improves model performance (in terms of better benchmark evaluation scores) with marginal additional computing overhead. This claim is well supported by the empirical evaluations. Methods And Evaluation Criteria: Both the proposed method and evaluation criteria make sense and follow the common practice of the LLM-related research field. Theoretical Claims: I checked the proofs of Theorem 3.1 and Lemma 3.2; both seem to make sense. Experimental Designs Or Analyses: The experimental design and analyses make sense and are pretty thorough. Supplementary Material: I checked the proof and additional experiments in the supplementary material. Relation To Broader Scientific Literature: The proposed method can potentially improve the data curation and training recipe of large language models or foundation models in general. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The paper is well-written and well-motivated. - Optimizing data mixture for LLM pretraining is an impactful research direction. - The proposed method is intuitive and easy to understand. - Both theoretical and empirical evaluations have been provided to justify the performance of the proposed method. Weaknesses: - Only models under the parameter size of 1B have been evaluated. 
It's not clear if the method scales well to larger model sizes, such as around 7B. - It seems that the experiments directly use domains contained in SlimPajama. I wonder if the method also works for data domains like web crawl data, code, books, arXiv/Wikipedia, etc., which are more common in LLM pretraining practices. - It seems that the major computational overhead comes from training small LLMs on sub-data domains. I wonder how large the small-scale LLMs need to be and what can be done when there are many data domains. Other Comments Or Suggestions: Please see the ones in "Strengths And Weaknesses". Questions For Authors: Please see the ones in "Strengths And Weaknesses". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We discuss specific questions below. > Only models under the parameter size of 1B have been evaluated. It's not clear if the method scales well to larger model sizes, such as around 7B. We agree it would be nice to extend our analysis to larger scale experiments, but at the moment, we do not have the computational resources to run those experiments. We did find MixMin's relative performance to be stable over the two scales that we tested (160M, 410M), and also in non-LLM domains (chemistry). Although it's far from conclusive, it suggests that the benefits of MixMin may continue to hold. > It seems that the experiments directly use domains contained in SlimPajama. I wonder if the method also works for data domains like web crawl data, code, books, arXiv/Wikipedia, etc., which are more common in LLM pretraining practices. SlimPajama domains already contain the mentioned categories. The seven domains in SlimPajama are: CommonCrawl, C4, GitHub, Books, arXiv, Wikipedia, and StackExchange. We also tested MixMin on sources from a completely different domain (chemistry), with varying combinations of sets, and MixMin continued to perform well. > It seems that the major computational overhead comes from training small LLMs on sub-data domains. I wonder how large the small-scale LLMs need to be and what can be done when there are many data domains. On the question of many domains, in our chemistry experiments we tested MixMin across many sets of sources, ranging from 9 to 1280 sources. We found MixMin improved as the number of sources grew. In terms of complexity, we found MixMin was more efficient than past approaches. MixMin still needs to train a proxy model for each source, which scales with the number of sources, but this is true of past approaches.
Importantly, we found the proxy models for MixMin could be obtained cheaply by using smaller models and less data (see our LLM experiments where we used <1% of the cost of the full training run to compute all the proxy models). Further experiments could explore pushing the cost of data mixing smaller than what we tested (<1% of the full training run), perhaps to find when MixMin would break. We will acknowledge this open problem in a revised draft.
Summary: The authors propose a novel method (called MixMin) for the problem of optimizing data mixtures for pre-training of large ML models in order to improve performance in downstream tasks. The MixMin method proposed by the authors solves this optimization problem with the following approach: First, MixMin trains a cheap/small “proxy model” on each of the individual data sources. Then, MixMin learns the best mixture of the “proxy model” outputs for the target dataset. This problem of finding the best mixture of “proxy model” outputs is a convex optimization problem, so MixMin solves this problem using simple entropic descent. Finally, MixMin outputs the optimized mixture of “proxy model” outputs as the optimal mixture of the corresponding data sources used to train each “proxy model”. The resulting mixture of data sources found by this MixMin method can then be used to train the larger, complex ML model that we want to pre-train and apply to the downstream task. The authors show that the larger the ML model of interest, the better the convex optimization problem solved by the MixMin method approximates the true data mixing objective. Thus, for models that are sufficiently large, solving the true data mixing optimization problem effectively becomes a simple convex optimization problem using the MixMin method. The authors demonstrate empirically that using MixMin to find optimal data mixtures improves performance of models on downstream tasks in language modeling and chemistry. Empirical results show that MixMin consistently outperforms baseline approaches for optimizing data mixtures, and that MixMin can find good data mixtures with significantly smaller computational cost compared to other approaches. ## update after rebuttal As I stated in my comment below, the authors answered all of my questions in their rebuttal, and I agree with the author's proposal of adding the provided list of "weak conditions for when MixMin works" to the paper. 
I therefore maintain that this work should be accepted for all of the reasons stated in my initial review. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked the proofs in Appendix Section A and they are correct as far as I am aware. Experimental Designs Or Analyses: Yes, I checked all experimental design setups and experimental results provided in the paper. All are valid as far as I am aware. Supplementary Material: Yes, I reviewed all supplementary material in the appendix. Relation To Broader Scientific Literature: Finding an optimal data mixture for machine learning pipelines is a very relevant problem to the broader ML research community because improving the data mixture used to train a model can significantly improve performance for downstream tasks. This problem is a particularly relevant problem for researchers who work on training and applying LLMs. The author’s proposed MixMin method for solving this problem is therefore relevant to the broader research community, especially since the authors provide convincing empirical evidence that MixMin consistently outperforms baseline approaches. Essential References Not Discussed: There are no essential references missing as far as I am aware. Other Strengths And Weaknesses: Strengths: Strength 1: Relevancy of problem setting: The problem of finding an optimal mixture of data to pre-train models efficiently is clearly a relevant problem that can improve performance of models in downstream tasks across a wide variety of applications and ML pipelines. Strength 2: Empirical results: The empirical results in the paper provide convincing evidence that the proposed MixMin method consistently outperforms other methods for finding optimal data mixtures across a variety of downstream tasks. 
This is further strengthened by the inclusion of performance on relevant, real-world chemistry tasks, and by the additional results demonstrating that MixMin can achieve good mixtures with smaller computation cost. Strength 3: Quality of Writing: The writing is very clear and easy to understand. Weaknesses: A minor weakness of this paper is that the novel methodological contribution is on the smaller side because the proposed MixMin method is a fairly simple approach to optimizing data mixtures by training proxy models and then optimizing a mixture with entropic descent. That being said, this is a very minor weakness because the authors show that the MixMin method works very well empirically, consistently outperforming other approaches and cutting computation cost. I therefore still recommend that this paper be accepted. Other Comments Or Suggestions: Typos: 1: Line 125/126: "filering" should be "filtering" 2: Line 209/210: "resuse" should be "reuse" 3: Line 572/573: "left hand sign" should probably be "left hand side"? Questions For Authors: Question 1: Can the authors provide any additional insight into when they expect MixMin to work well in practice? In what scenarios (i.e. what types of data sources, downstream tasks, etc.) do the authors expect that using these cheap proxy models and applying the MixMin method will be successful? In what scenarios do the authors expect that MixMin would not be useful (i.e. data sources with covariate shift)? It would be nice to have a concrete list of criteria for problems such that the authors think that MixMin would be useful. Such a list would allow practitioners to more easily determine if MixMin is applicable to their particular problem setting, so I think adding something like this would strengthen the paper. Question 2: Do the authors have any concrete ideas of how MixMin could be adapted in the future to handle data sources with covariate shift (i.e. image classification)? 
To be clear, I don’t think that concrete ideas for this are necessary to include in this paper, but I am intellectually curious. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback! We elaborate on questions raised in the review below, and will update the draft with the typos pointed out by the reviewer. > Can the authors provide any additional insight into when they expect MixMin to work well in practice? In what scenarios (i.e. what types of data sources, downstream tasks, etc.) do the authors expect that using these cheap proxy models and applying the MixMin method will be successful? In what scenarios do the authors expect that MixMin would not be useful (i.e. data sources with covariate shift)? If our proxy models are sufficiently close to Bayes optimal, then MixMin will find near-optimal mixture weights. This holds if there is no covariate shift among our sources and we ultimately train another near-Bayes-optimal model on the mixture (i.e., the big data and big model regime). Note, if the downstream task is not close to any mixture, then MixMin (or any data mixing) may not help performance on the downstream task. The main question is when the proxy models are "practically" sufficiently close to Bayes optimal for the MixMin optimization. Our language modeling experiments showed that using 1% of the compute for the proxy models is comparable to using 100% of the compute of a full training run. We believe this suggests a general approach to selecting the compute for the proxy models: we could iteratively increase the proxy model compute, run MixMin, and see when the mixture becomes stable across scale. Ultimately, we did not need to do this as the cheapest settings we tested already performed better than the baselines. > It would be nice to have a concrete list of criteria for problems such that the authors think that MixMin would be useful. We thank the reviewer for the suggestion of providing a list of criteria for using MixMin.
Given our previous discussion we propose to add the following list to the paper: “(Weak) Conditions for when MixMin works: 1) The data sources do not have covariate shift (or we can change the learning problem to not have covariate shift) 2) We believe the downstream task is close to/contained within a mixture of our sources 3) Our final model class is very expressive (i.e., do not have significant architectural biases) 4) Our proxy model learnt some signal for each source (e.g., even with very little data): if we have flexible compute, consider iteratively using more compute for the proxy models until some stability in the mixtures is reached” > Do the authors have any concrete ideas of how MixMin could be adapted in the future to handle data sources with covariate shift (i.e. image classification)? To be clear, I don’t think that concrete ideas for this are necessary to include in this paper, but I am intellectually curious. For covariate shift, the primary bottleneck we hit was non-convexity. The formula for the Bayes optimal function of a mixture with covariate shift is a fraction with a linear model of the mixture weights in the numerator and denominator (see Appendix A.1 formula). When we naively ran entropic descent to optimize the mixture with this expression we hit instability, and couldn’t get convergence without carefully tuning the learning rate. Even then it was not clear to us if this was a global optima. In fact, in preliminary experiments directly optimizing the conditional p(A|Q) tasks for LLMs (which was non-convex given covariate shifts in the question strings) gave worse mixtures to our approach of making the task generative, i.e., learning p(Q,A). An analog of our approach, replacing p(y|x) tasks with p(y,x), could be done for image classification. People are currently training generative models to jointly learn the distribution of labels and images (multi-modal models). 
In this case MixMin (with no further modifications) can be used despite there being “covariate shifts” between the sources (i.e., image shifts). --- Rebuttal Comment 1.1: Comment: Thank you for answering all of my questions. I agree with the author's proposal of adding this provided list of "weak conditions for when MixMin works" to the paper. I maintain that this work should be accepted.
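The entropic descent referred to in the exchange above is mirror descent with an entropy regularizer, i.e., exponentiated-gradient updates on the probability simplex. Below is a minimal sketch on a toy quadratic objective; the loss, names, and hyperparameters are illustrative stand-ins, not the MixMin objective or the authors' code:

```python
import numpy as np

def entropic_descent(grad_fn, k, lr=0.5, steps=500):
    """Exponentiated-gradient (entropic mirror) descent over the k-simplex."""
    w = np.full(k, 1.0 / k)            # start from the uniform mixture
    for _ in range(steps):
        w = w * np.exp(-lr * grad_fn(w))
        w = w / w.sum()                # renormalize back onto the simplex
    return w

# Toy convex stand-in: recover a target mixture t under a quadratic loss.
t = np.array([0.7, 0.2, 0.1])
grad = lambda w: 2.0 * (w - t)         # gradient of ||w - t||^2
w_star = entropic_descent(grad, k=3)   # stays a valid distribution throughout
```

Because the update is multiplicative, the iterate remains on the simplex by construction; the instability reported under covariate shift comes from the objective becoming non-convex, not from the update rule itself.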
Oracle-MoE: Locality-preserving Routing in the Oracle Space for Memory-constrained Large Language Model Inference
Accept (poster)
Summary: This paper proposes a new MoE architecture, Oracle-MoE, to address the latency issues associated with deploying large language models (LLMs) on edge devices with limited memory. The key idea is to route tokens in a compact space, called the oracle space, which is derived from attention scores to maintain semantic locality across tokens. This approach reduces the frequent swapping of experts in and out of memory, which is a major cause of latency. The paper provides theoretical analysis and experimental results on various GPT-2 models and downstream tasks, demonstrating that Oracle-MoE achieves state-of-the-art inference speeds without compromising task performance. This work primarily targets the low throughput setting where "only a few activated experts <are required> to reside in memory for inference". Claims And Evidence: Claims with Clear and Convincing Evidence: - High latency due to expert swapping: The paper provides clear evidence of the high latency caused by expert swapping in memory-constrained MoE-based LLM inference. They show that 50-85% of the latency is due to I/O overheads from expert swapping. - Temporal inconsistencies in expert activations: The visualization of expert activations over a sequence of tokens clearly demonstrates the temporal inconsistencies, leading to frequent expert swapping. - Oracle-MoE reduces expert activation variations: The paper provides evidence that Oracle-MoE effectively reduces expert activation variations compared to the Switch Transformer. - Performance on downstream tasks: The results on various downstream tasks show that Oracle-MoE achieves similar or better performance compared to the Switch Transformer. Claims that Need Further Support: - Semantic locality: While the paper claims that tokens with higher mutual attention scores share similar high-level semantics, I am not convinced there is strong evidence of this. 
- Effectiveness of oracle space: The paper claims that the oracle space efficiently describes various high-level semantics and that routing in this space preserves semantic locality; again, there is no strong evidence of this. Overall, the paper presents a promising approach to address the latency challenges in memory-constrained MoE-based LLM inference. However, providing more detailed evidence and explanation for some of their claims would further strengthen their contribution. I'll also note that, if you're in the extremely memory-constrained MoE-based LLM inference setup, maybe an MoE is NOT the right solution. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria in the paper generally make sense for the problem of memory-constrained LLM inference on edge devices. They define an Oracle Space motivated by Semantic Grouping of tokens. They cluster tokens into groups in the oracle space, and they use Expert Prediction to see how much they can accelerate async loading of experts. The chosen evals (Trivia QA, GLUE, MAG, Sci-Cite, XSum) are standard and relevant for evaluating the performance of LLMs on various downstream tasks; other evals could have been chosen, but these are ok. Some of the chosen evals are not the most standard in the literature; hard to say what is / is not cherry-picked. Memory-Latency Curve and First Token Latency are used to measure the trade-off between memory usage and inference latency; these are reasonable choices. The proposed methods and evaluation are good. Theoretical Claims: I read Theorem 1 and skimmed the appendix proof. I have no strong statements about it. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and valid. They provide convincing evidence for the effectiveness of Oracle-MoE in reducing latency without compromising performance. 
However, as mentioned earlier, providing more details on certain aspects, such as semantic locality and expert prediction, would further strengthen the paper's contributions. Supplementary Material: Skimmed the appendix proofs; didn't go into too much detail. Relation To Broader Scientific Literature: There are a lot of works (e.g., Mixtral) which look at expert assignment based on domains. These are not exactly the same as semantic meaning of tokens, but they should be mentioned and the difference from what you're doing explicitly noted. Essential References Not Discussed: Not that I noticed. Other Strengths And Weaknesses: Narrow scope of application: If you're in a memory-constrained LLM inference setup, maybe you shouldn't be using MoEs. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 3
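The expert-swapping cost at the heart of this review can be made concrete with a toy LRU expert-cache simulation (a hypothetical sketch, not the paper's serving code): temporally inconsistent routing thrashes the cache, while temporally consistent routing barely swaps at all.

```python
from collections import OrderedDict

def simulate_swaps(activations, cache_size):
    """Count expert loads (I/O swaps) for a stream of per-token expert IDs,
    with an LRU cache holding at most `cache_size` experts in memory."""
    cache, swaps = OrderedDict(), 0
    for expert in activations:
        if expert in cache:
            cache.move_to_end(expert)      # hit: refresh recency
        else:
            swaps += 1                     # miss: load expert from storage
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least-recently-used expert
            cache[expert] = True
    return swaps

# 100 tokens cycling over 4 experts: every access misses a 2-expert cache.
inconsistent = simulate_swaps([0, 1, 2, 3] * 25, cache_size=2)             # 100
# Same 100 tokens grouped by expert: only 4 loads in total.
consistent = simulate_swaps([0] * 25 + [1] * 25 + [2] * 25 + [3] * 25, 2)  # 4
```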
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them in the updated version. Figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ Semantic locality: While the paper claims that tokens with higher mutual attention scores share similar high-level semantics, I am not convinced there is strong evidence of this. _A1_ Previous studies [A][B] on representation space analysis have shown that semantically similar samples exhibit higher similarity in their embeddings compared to semantically dissimilar ones, which is also widely validated in experiments with general-purpose large models. We corroborate this observation and further identify a more fine-grained similarity pattern: token representations encapsulate both high-level semantics and token identity semantics. Among tokens with the same identity, the embeddings of those that share the same high-level semantic meaning tend to be more similar. This pattern is consistently observed in various models, including widely used large models like DeepSeek-16B-2.8B and Qwen1.5-MoE-A2.7B, as illustrated in Figure 2 in our paper and Figure a in the link. Theoretical insights into how attention mechanisms compute correlations between tokens using the inner product of query (Q) and key (K) vectors are also supported by existing studies [C][D][E][F]. The computation of attention scores involves first assessing token correlations through inner products of query (Q) and key (K) vectors, followed by normalization of these correlations via softmax, and finally allocating contextual information through value (V) vectors weighted by the normalized scores. Among these steps, the Q-K inner product effectively captures token similarity and reflects high-level semantic alignment, as visualized in Figures b and d in the link. 
[A] A Survey on Word Embeddings: From Shallow Models to Deep Learning (Goldberg, 2017) [B] Deep Learning for NLP and Speech Recognition (Hinton et al., 2012) [C] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Raffel et al., 2020) [D] Effective Approaches to Attention-based Neural Machine Translation (Luong et al., 2015) [E] Analyzing the Structure of Attention in Transformers (Kobayashi et al., 2020) [F] On the Expressive Power of Self-Attention Matrices (Likhosherstov et al., 2021) _Q2_ Effectiveness of oracle space: The paper claims that the oracle space efficiently describes various high-level semantics and that routing in this space preserves semantic locality; again, there is no strong evidence of this. _A2_ As mentioned above, self-attention, and especially the attention score, can reflect and amplify the high-level semantics across tokens in the same context. Therefore, the oracle space, which is constructed by grouping tokens with high attention scores and extracting high-level semantics from them, can capture semantic locality well. To further demonstrate the generalizability of this conclusion, we constructed oracle spaces with DeepSeekMoE-16B and Qwen1.5-MoE-A2.7B on several datasets, as illustrated in Figures b-e in the link. We found that semantic group embeddings in these oracle spaces all preserve semantic locality well, with semantic group embeddings varying slowly and smoothly, showing the potential for scaling up and generalization. _Q3_ Narrow scope of application: If you're in a memory-constrained LLM inference setup, maybe you shouldn't be using MoEs. _A3_ Pursuing the trade-off between performance and latency is a key topic in the field of LLM inference. The MoE structure, which scales up model performance without increasing the number of activated parameters, is the most edge-friendly model architecture. 
Recently, sparse MoE models like DeepSeek and QwQ have exhibited impressive capabilities, demonstrating the value of MoE edge deployment. Companies like Qualcomm have also started research concerning MoE edge deployment, which was presented at the NeurIPS 2024 enterprise section, illustrating the huge market value of this problem. It is believed that this technology will further increase the accessibility of LLMs on edge devices, better highlighting the advantages of MoE on edge devices. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Overall, the rebuttal hasn't shifted my views much. The additional evidence and clarifications provided by the authors make a stronger case for the validity and potential impact of Oracle-MoE, but I'm going to keep my current rating.
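The attention-score computation described in A1 above (Q-K inner products followed by softmax normalization) can be sketched in a few lines of numpy; the shapes and random data here are arbitrary illustrations, not the models discussed:

```python
import numpy as np

def attention_scores(Q, K):
    """Scaled dot-product attention scores: token-pair correlations from
    Q-K inner products, normalized per query row with softmax."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                  # raw token correlations
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))
A = attention_scores(Q, Q)   # self-attention: each row is a distribution over tokens
```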
Summary: This paper proposes Oracle-MoE, which improves MoE inference efficiency by exploiting semantic locality to reduce swapping demands. Claims And Evidence: Please see **Other Strengths And Weaknesses** Methods And Evaluation Criteria: Please see **Other Strengths And Weaknesses** Theoretical Claims: Please see **Other Strengths And Weaknesses** Experimental Designs Or Analyses: Please see **Other Strengths And Weaknesses** Supplementary Material: I have only reviewed Appendix B. Relation To Broader Scientific Literature: Please see **Other Strengths And Weaknesses** Essential References Not Discussed: Please see **Other Strengths And Weaknesses** Other Strengths And Weaknesses: **Strengths**: 1. The paper is easy to follow, with clear writing and presentation. 2. The authors provide detailed theoretical analysis to justify their method. 3. The evaluation results are good and comprehensive. **Weaknesses**: 1. The main concern I have with this paper is the issue of scalability. In the paper, the largest MoE models are 2B with 32 experts. However, existing models like Mixtral-8x7B [1] and DeepSeekMoE-16B [2] are much larger. It would be better if the authors could provide results on these models to ensure the scale-up and scale-out ability of the proposed method. 2. How would the number of sampled data points in oracle space initialization impact the accuracy performance? Would this cause a potentially high overhead for larger models? 3. In terms of figure location, I suggest putting all figures at the top of the page instead of in between the text (see Figure 2/3/4). [1] Mixtral of Experts, Arxiv 2024. [2] DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models, Arxiv 2024. Other Comments Or Suggestions: Please see **Other Strengths And Weaknesses** Questions For Authors: Please see **Other Strengths And Weaknesses** Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them in the updated version. Figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ The main concern I have with this paper is the issue of scalability. In the paper, the largest MoE model is 2B with 32 experts. However, existing models like Mixtral-8x7B [1] and DeepSeekMoE-16B [2] are much larger. It would be better if the authors could provide results on these models to ensure the scale-up and scale-out ability of the proposed method. _A1_ Considering limited training resources and time, we train a model following the setting of DeepSeekMoE-16B but with fewer parameters (3B): 12 MoE layers with 64 routed experts each, where the hidden size is set to 1536 and the expert intermediate size is set to 1024. The top 6 experts are selected for each token. Our method still achieved a 75% latency reduction at 2.5GB memory. Meanwhile, our model maintains the performance of downstream tasks. On Trivia QA, our model achieves an F1 Score of 50.20, compared to the baseline of 50.75. On XSum, our model attains a ROUGE-1 score of 21.74, while the baseline score is 21.22.
|Inference Latency|2.5GB|4.5GB|7GB|10GB|12GB(full)|
|---|---|---|---|---|---|
|Our Oracle Routing(s)|24.937|19.701|17.134|16.726|15.601|
|Normal MoE(s)|113.828|89.082|44.013|27.058|15.601|
_Q2_ How would the number of sampled data points in oracle space initialization impact the accuracy performance? Would this cause a potentially high overhead for larger models? _A2_ As for the sampling overhead, the largest batch size we sampled in our experiments was 16382, and it only took 20 minutes for our 8*GTX3090 platform to process the sampling. Compared with the 32.94 hours of training time, the sampling and oracle space construction contribute only 1% of the overall wall-clock time, which is negligible. 
At the inference stage, our method introduces no extra overhead, so larger models are not a concern. _Q3_ In terms of figure location, I suggest putting all figures at the top of the page instead of in between the text (see Figure 2/3/4). _A3_ Thank you for your suggestion. We will adjust the layout of these figures in the updated version. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will maintain my score.
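The sampling-based oracle-space initialization discussed in A2 amounts to clustering a batch of sampled token embeddings into semantic groups. Below is a plain Lloyd's K-means sketch on synthetic, well-separated data; it is a hypothetical illustration only, and the authors' actual pipeline, dimensions, and cluster counts may differ:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means: the centroids play the role of semantic
    group embeddings spanning the oracle space."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)          # assign each token to a group
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)  # recenter group
    return centroids, labels

# Two well-separated synthetic "semantic groups" of token embeddings.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, (20, 4)), rng.normal(10.0, 0.05, (20, 4))])
centroids, labels = kmeans(X, k=2)
```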
Summary: This paper presents Oracle-MoE, a novel Mixture-of-Experts (MoE) architecture aimed at efficiently deploying Large Language Models (LLMs) on memory-constrained edge devices. Current MoE models, despite theoretical advantages for memory efficiency, suffer from high latency during inference due to frequent swapping of experts in and out of limited device memory. The authors identify that these latencies stem primarily from temporal inconsistencies in expert activations between consecutive tokens. To address this, Oracle-MoE introduces a new routing mechanism that groups tokens based on semantic similarity, routed within a compact "oracle space" defined by semantic group embeddings. This method significantly reduces the variability of expert activations, thereby minimizing expert swapping overhead and improving inference speed on devices with limited memory. Claims And Evidence: The claims and evidence are clear and well supported. Methods And Evaluation Criteria: The paper proposes routing based on semantic embeddings derived from attention score clustering. In general, it is innovative and well justified. However, Oracle-MoE initialization requires a warm-up stage and clustering analysis (e.g., via K-means and SVD), which introduces complexity. Evaluation is done primarily on the NVIDIA Jetson Xavier NX, which could limit the scope of edge devices. Broader evaluations could enhance confidence in the method’s universality. Theoretical Claims: The theoretical claims are well justified. Experimental Designs Or Analyses: The experimental designs and analyses are sound. Supplementary Material: I did not check the math of the supplementary material. Relation To Broader Scientific Literature: This is going to have the most impact on the budget-constrained inference domain. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths - The paper clearly identifies a practical limitation—high latency from expert swapping—providing insightful analysis into its underlying cause (temporal inconsistency). - The proposal of routing tokens based on semantic locality rather than token-specific embeddings is innovative and well-motivated - The authors offer a robust theoretical framework around Consecutive Semantic Difference (CSD) - A detailed breakdown of latency composition provides clear empirical insights into exactly how Oracle-MoE improves performance over baseline strategies. - Oracle-MoE is novel in its introduction of an oracle-space routing approach, leveraging semantic locality in attention scores Weaknesses - The paper could benefit from further discussions on practical deployment considerations and real-world constraints, such as more diverse hardware scenarios beyond the NVIDIA Jetson platform, like A100s and H100s. - The concept of predicting expert activations in deeper layers based on shallow layers is promising, but the rationale and robustness of the 85%-95% prediction accuracy are left for future exploration. More details on this approach would strengthen the work significantly. - While semantic locality is effectively leveraged, the paper does not deeply investigate scenarios where semantic locality is minimal (highly diverse or abrupt topic changes), potentially limiting generalizability. - The paper did not discuss cases of using fine-grained experts ( e.g., number of experts > 128, like the ones in DeepSeek's model). Other Comments Or Suggestions: "Lanuge" -> Language in title Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them to the updated version. The figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ The paper could benefit from further discussions on practical deployment considerations and real-world constraints, such as more diverse hardware scenarios beyond the NVIDIA Jetson platform, like A100s and H100s. _A1_ Results on A100s are listed below. Our method still speeds up inference by 50%~350%.
|Model size|Memory budget|Switch(FIFO)|Switch(LRU)|Switch(SwapMoE)|Ours(FIFO)|
|---|---|---|---|---|---|
|9*24(2.06B)|1GB|16.613s|16.066s|15.900s|4.012s|
||2GB|13.762s|15.229s|12.751s|3.630s|
||4GB|11.165s|11.973s|9.754s|3.529s|
||7GB|6.688s|7.115s|6.135s|3.410s|
||Full Memory|3.252s|3.252s|3.252s|3.252s|
_Q2_ The concept of predicting expert activations in deeper layers based on shallow layers is promising, but the rationale and robustness of the 85%-95% prediction accuracy are left for future exploration. More details on this approach would strengthen the work significantly. _A2_ We predict by training a linear classifier with representations in shallow layers as input and routing results in deep layers as target labels. It turns out that such a linear classifier can reach an accuracy of 85%\~95% in our structure and 40%\~60% in existing MoE structures. More fine-grained observations show that layers that are closer predict each other well, e.g., representations in layer 10 predict routing results in layer 12 better than those of layer 5, and layers that predict each other well show a grouped pattern. We have been studying this phenomenon and believe it can be attributed to residual connections maintaining semantics across layers to some extent. 
|Avg Prediction Acc from|Qwen|Switch(Ourtrained)|Oracle(Ourtrained)| |---|---|---|---| |Layer 0|48.23|50.98|89.66| |Half of the model layers|55.59|59.72|92.82| |3/4 of the model layers|63.27|68.68|95.51| _Q3_ While semantic locality is effectively leveraged, the paper does not deeply investigate scenarios where semantic locality is minimal (highly diverse or abrupt topic changes), potentially limiting generalizability. _A3_ We tested scenarios where the topic changes frequently. We randomly sample sentences from different datasets and combine them into a whole sequence. We observed that our proposed oracle space can still distinguish semantic groups efficiently, both in our models and public large MOE models (DeepSeek-16B-2.8B\Qwen1.5-MoE-A2.7B), as shown in Figure g,h,i,j in the link. We also tested the expert activation variation of such highly diverse data with Oracle-MoE and switch-transformer. On average, in every 100 consecutive token generations, Oracle-MoE only changes 12.20 times while the switch transformer changes 90.54 times. This is because in human natural language, it takes at least dozens of tokens to express a complete meaning so our method still benefits from such "abrupt" semantic locality. _Q4_ The paper did not discuss cases of using fine-grained experts ( e.g., number of experts > 128, like the ones in DeepSeek's model). _A4_ Considering limited training resources and time, we train a model following the setting of DeepSeekMoE-16B but with fewer parameters(3B): 12 MoE layers with 64 routed experts each, where hidden size is set to 1536 and expert intermediate size is set to 1024. The top 6 experts are selected for each token. Our method still achieved a 75% latency reduction at 2.5GB memory. Meanwhile, our model maintains the performance of downstream tasks. On Trivia QA, our model achieves an F1 Score of 50.20, compared to the baseline of 50.75. On XSum, our model attains a ROUGE-1 score of 21.74, while the baseline score is 21.22. 
|Inference Latency|2.5GB|4.5GB|7GB|10GB|12GB(full)| |---|---|---|---|---|---| |Our Oracle Routing(s)|24.937|19.701|17.134|16.726|15.601| |Normal MoE(s)|113.828|89.082|44.013|27.058|15.601| _Q5_ typo "Lanuge". _A5_ We will correct the typos in the updated version.
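The expert-prediction probe described in A2 above (a linear classifier from shallow-layer representations to deep-layer routing labels) can be sketched with a least-squares probe on synthetic data; all shapes, seeds, and numbers are illustrative, not the authors' setup:

```python
import numpy as np

def fit_linear_probe(H, labels, n_experts):
    """Least-squares linear probe: hidden states H -> one-hot expert labels."""
    Y = np.eye(n_experts)[labels]                 # one-hot routing targets
    Hb = np.hstack([H, np.ones((len(H), 1))])     # append a bias column
    W, *_ = np.linalg.lstsq(Hb, Y, rcond=None)
    return W

def probe_accuracy(W, H, labels):
    Hb = np.hstack([H, np.ones((len(H), 1))])
    return float(((Hb @ W).argmax(axis=1) == labels).mean())

# Synthetic shallow-layer states clustered by the expert they route to.
rng = np.random.default_rng(0)
n_experts, d, n = 4, 16, 200
labels = rng.integers(0, n_experts, size=n)
means = rng.normal(size=(n_experts, d)) * 3.0
H = means[labels] + rng.normal(size=(n, d)) * 0.5
W = fit_linear_probe(H, labels, n_experts)
acc = probe_accuracy(W, H, labels)   # near 1.0 on this separable toy data
```

High probe accuracy on real models would indicate that shallow-layer states already determine deep-layer routing well enough to prefetch experts asynchronously.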
Summary: The paper introduces Oracle-MoE, a novel Mixture-of-Experts (MoE) architecture designed specifically for memory-constrained inference on edge devices. The main idea is to replace conventional token-level routing with an oracle-space routing mechanism that leverages semantic locality. By grouping tokens based on their high-level semantic similarity (extracted via attention scores) and using compact semantic group embeddings (the “oracle space”), the method reduces the frequency of expert swapping and thus significantly lowers inference latency. Extensive experiments on GPT-2-based models across various sizes and downstream tasks show that Oracle-MoE achieves state-of-the-art latency improvements while maintaining competitive task performance. Claims And Evidence: Some claims are supported by clear and convincing evidence. However, some important claims are problematic. For example: (1) The temporal inconsistencies of inter-token expert activation are not clear. The authors only show visualization results for a specific sample. It would be better to measure the temporal inconsistencies for the whole dataset and different layers. (2) The key assumption of semantic locality, i.e., that the linguistic meaning of consecutively generated tokens is typically consistent, is not supported by any evidence. (3) Can the authors explain more about why the mapping of Q/K/attention score will group consecutive tokens with similar semantics? Why does this happen for different layers/samples? Methods And Evaluation Criteria: The proposed method makes sense to reduce latency by eliminating massive swapping demands. Theoretical Claims: I didn’t check the theoretical claims carefully. Experimental Designs Or Analyses: Yes. I checked the experimental design and analysis. 
Although the authors provide memory-latency results and demonstrate the overall performance on QA, classification, and summarization tasks, several important experiments are missing: (1) The experimental result on the temporal inconsistency of inter-token expert activation is missing. It would be more convincing to provide a metric to quantitatively measure the temporal inconsistency and show how Oracle-MoE reduces this inconsistency. (2) The semantic locality, as a key assumption, is not well explained. It would be better to provide some preliminary experiments to show the evidence in real datasets. It would be interesting to see the semantic locality across different models/layers/samples. (3) There are some approximations in the derivation of Oracle-MoE, such as CSD_token in Line 147, page 3, and CSD_oracle in Line 231, page 5. It would be better to validate such approximations in the experimental design. (4) An ablation study on CSD and a hyperparameter study of $\gamma$ are missing. How does $\gamma$ influence the proposed algorithm? Supplementary Material: I didn’t read the supplementary material carefully. Relation To Broader Scientific Literature: The key contributions can accelerate MoE model inference in edge-device scenarios. Essential References Not Discussed: Yes. Other Strengths And Weaknesses: Strengths: 1. The key idea of maintaining the semantic locality across consecutive tokens is very simple and efficient in terms of latency. 2. This paper is well-organized and easy to read. The paper first shows the latency bottleneck of expert swapping in the memory-constrained scenario, and then formulates the latency optimization problem with a simple yet effective solution. The overall organization is easy to follow and smooth. 3. The authors demonstrate the effectiveness of the proposed method across different tasks and provide some visualization results (such as attention and semantic group embeddings) to support some claims. 
Weakness: (1) The experimental result on the temporal inconsistency of inter-token expert activation is missing. It would be more convincing to provide a metric to quantitatively measure the temporal inconsistency and show how Oracle-MOE reduces this inconsistency. (2) The semantic locality, as a key assumption, is not well explained. It would be better to provide some preliminary experiments to show the evidence in real datasets. It would be interesting to see the semantic locality across different models/layers/samples. (3) There are some approximations in the derivation of oracle-MOE, such as CSD_token in Line 147, page 3, CSD_oracle in Line 231, page 5. It would be better to validate such approximation in the experiments design. (4) Ablation study on CSD and hyperparameter study of $\gamma$ is missing. How does $\gamma$ influence the proposed algorithm? Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your professional review comments and suggestions. We will add them in the updated version. Figures mentioned can be accessed at: https://anonymous.4open.science/r/ICML2025REBUTTAL-E158/README.md _Q1_ It would be better to measure the temporal inconsistencies for the whole dataset and different layers. It would be more convincing to provide a metric to quantitatively measure the temporal inconsistency and show how Oracle-MoE reduces this inconsistency. _A1_ We propose temporal activation inconsistency, defined as the average number of inconsistent expert activations per 100 consecutive tokens per expert. Results over the entire dataset and across different models and layers are listed below. Existing MoEs show strong temporal activation inconsistency within all layers, while Oracle-MoE reduces this.
|Activation inconsistency|DeepSeek|Qwen|Switch|Oracle|
|--|--|--|--|--|
|1st 1/4 layers avg|80.84|81.56|69.20|6.03|
|2nd 1/4 layers avg|65.35|71.04|64.87|4.82|
|3rd 1/4 layers avg|70.68|75.37|53.36|4.20|
|4th 1/4 layers avg|76.61|77.16|75.44|5.11|
_Q2_ It would be better to provide some preliminary experiments to show the evidence of semantic locality in real datasets, .. across different models/layers/samples. _A2_ Experiments with DeepSeekMoE-16B and Qwen1.5-MoE-A2.7B on real chat datasets (Wizard-of-Wikipedia and Synthetic-Persona-Chat) are shown in Figures a-e in the link. Semantic locality appears across different models/layers/samples. Semantic groups can still be distinguished based on attention scores and obtained by our method. This indicates the potential of Oracle-MoE as a general-purpose solution. _Q3_ Can the authors explain more about why the mapping of Q/K/attention score will group consecutive tokens with similar semantics? Why does this happen for different layers/samples? 
_A3_ Previous studies [A][B] have shown that semantically similar samples exhibit higher embedding similarity than semantically dissimilar ones, widely validated in experiments with large models. We corroborate this observation and identify a more fine-grained pattern: token representations encapsulate high-level and token identity semantics. Among tokens with the same identity, embeddings of those sharing the same high-level semantics tend to be more similar. This pattern is consistently observed in models like DeepSeek-16B-2.8B and Qwen1.5-MoE-A2.7B, as illustrated in Figure 2 in the paper and Figure a in the link. Theoretical insights are also supported by [C][D]. Computing attention scores involves first assessing token correlations through inner products of Q and K vectors, normalization via softmax, and allocating contextual information through V weighted by the normalized scores. Among these, the Q-K inner product effectively captures token similarity and reflects high-level semantic alignment, as visualized in Figure b,d in the link. [A] A Survey on Word Embeddings: From Shallow Models to Deep Learning (Goldberg, 2017) [B] Deep Learning for NLP and Speech Recognition (Hinton et al., 2012) [C] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Raffel et al., 2020) [D] Analyzing the Structure of Attention in Transformers (Kobayashi et al., 2020) _Q4_ There are some approximations in the derivation of oracle-MOE. It would be better to validate such an approximation in the experimental design. _A4_ Experiments validating this approximation are listed below: for token pairs with the distance between [x,y), we count on average how many inconsistent expert activations are triggered. There is a statistically positive correlation that the higher the distance between tokens, the more likely they activate different experts. More results are in Figure f in the link. 
|Embedding L2 Distance|[0.0,22)|[22,33)|[33,44)|[44,55)|[55,66)|[66,77)|[77,88)|[88,+∞)|
|---|---|---|---|---|---|---|---|---|
|Inconsistent activation per token|0.0|0.083|0.222|0.377|0.576|0.770|0.878|0.903|
_Q5_ Ablation study on CSD and hyperparameter study of γ are missing. How does γ influence the proposed algorithm? _A5_ CSD and γ are key metrics we defined for the optimization problem of minimizing the model's consecutive inconsistent activations while maintaining its task performance above γ. In our paper, we design oracle space routing to reduce CSD while maintaining γ. We found that the number of MoE layers and experts per layer can influence CSD and γ, as listed in the table below. CSD is given as the average number of inconsistent expert activations per 100 tokens in all layers, and γ is given as the average over all tasks tested. Our model has a larger γ and less CSD, performing consistently well at large expert numbers, demonstrating our method's robustness to hyperparameters.
|Model Scale|2*4(192M)|4*8(295M)|8*16(729M)|9*24(2.06B)|
|--|--|--|--|--|
|our model's γ exceeds the baseline's by (larger better)|+0.02|+0.60|+0.49|+0.36|
|our model reduces CSD by (smaller better)|-25.10|-53.94|-71.23|-85.41|
--- Rebuttal Comment 1.1: Comment: Thanks for the detailed and thoughtful response. I have increased my score to 3 accordingly.
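The temporal activation inconsistency metric defined in A1 (inconsistent expert activations per 100 consecutive tokens) can be computed from per-token sets of activated expert IDs; this is a minimal sketch with synthetic routing traces, not the authors' measurement code:

```python
def activation_inconsistency(routing, window=100):
    """Average number of newly activated experts per `window` consecutive
    tokens, given a list of per-token sets of activated expert IDs."""
    changes = sum(len(cur - prev) for prev, cur in zip(routing, routing[1:]))
    return changes * window / max(len(routing) - 1, 1)

stable = [{0, 1}] * 50 + [{2, 3}] * 50                 # one group switch in 100 tokens
unstable = [{i % 8, (i + 1) % 8} for i in range(100)]  # a new expert every token
low = activation_inconsistency(stable)      # ~2.02
high = activation_inconsistency(unstable)   # 100.0
```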
Less is More: Federated Graph Learning with Alleviating Topology Heterogeneity from A Causal Perspective
Accept (poster)
Summary: This work proposed a causal subgraph learning method for graph federated learning. The work consists of three critical components. First, the edge evaluator separates the local subgraph into a causal subgraph and a biased subgraph. Second, the dual-GNN is developed to encode the corresponding subgraphs. Third, the separability of the causal and biased subgraphs is enhanced by HSIC. The experimental results on six graph datasets demonstrate the proposed method’s superiority. Claims And Evidence: Yes, the authors claimed that mining a few critical causal factors is more effective than introducing new global information, which is reasonable, and the experimental results also demonstrated the validity of this argument. Methods And Evaluation Criteria: Yes. This work adheres to the perspective that mining causal subgraphs is more conducive to reducing heterogeneity in graph federated learning, which is a novel viewpoint for this field. Furthermore, the experimental setup and evaluation criteria follow the classic works in graph federated learning, which is reasonable. Theoretical Claims: The proposed Theorem 1 shows the upper bound for a graph federated learning system. However, similar generalization analyses have appeared in many domain adaptation and domain generalization efforts, so it is difficult to see a difference between them. Experimental Designs Or Analyses: Yes. I have checked the experimental designs and analyses. The authors adopted the Louvain method to split the graph into multiple private subgraphs, and compared the proposed methods with nine typical algorithms. The designs are well thought out. However, the analysis of the experimental results needs to be improved, e.g., with a hyperparameter study. Supplementary Material: I reviewed the whole appendix. Relation To Broader Scientific Literature: This work proposed a decoupled graph representation learning method in the scenario of federated learning, which might be inspired by literature [1]. 
The proposed Theorem 1 in this work is drawn from literature [2]. [1] Ma J, Cui P, Kuang K, et al. Disentangled graph convolutional networks, ICML, 2019. [2] Muandet K, Balduzzi D, Schölkopf B. Domain generalization via invariant feature representation, ICML, 2013.

Essential References Not Discussed: Refer to 'Relation to Broader Scientific Literature'.

Other Strengths And Weaknesses: Strengths: (a) This work proposed a graph decoupling method for graph federated learning, which is significantly different from the existing works. The idea is inspiring, and the writing is good. (b) This work demonstrates the effectiveness of the proposed method from two aspects. First, the work constructs a theoretical framework to show the effectiveness. Second, extensive experiments are conducted to prove the claims.

Weaknesses: (a) In Fig. 3, the authors claim to capture the causal and biased factors of local subgraphs, but do not explain clearly what a causal factor is and what a biased factor is for local subgraphs. (b) The work proposes a generalization analysis for graph federated learning, which has frequently been presented in domain adaptation and domain generalization. Hence, the differences between them must be explained. (c) The computational complexity of the proposed method must be analyzed. (d) The simulation environment should be introduced in further detail. (e) The font of some figures is too small; it is suggested to adjust them.

Other Comments Or Suggestions: See the weaknesses.

Questions For Authors: See the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
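The review above mentions that the global graph is split into multiple private subgraphs with the Louvain method. As a minimal illustration of that federated data split, the sketch below assumes the community detection step (e.g., Louvain) has already produced node groups; the function name and toy graph are hypothetical, not taken from the paper.

```python
# Sketch of the federated split: each client receives one community and keeps
# only the edges whose endpoints both lie inside it (cross-community edges are
# dropped, which is one source of the topological heterogeneity discussed here).

def split_into_clients(edges, communities):
    """edges: list of (u, v) pairs; communities: one node set per client.
    Returns, per client, the induced subgraph's edge list."""
    client_edges = []
    for nodes in communities:
        client_edges.append([(u, v) for (u, v) in edges
                             if u in nodes and v in nodes])
    return client_edges

# Toy graph: two triangles joined by a single cross edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
communities = [{0, 1, 2}, {3, 4, 5}]
subgraphs = split_into_clients(edges, communities)
print(subgraphs)  # the cross edge (2, 3) appears in neither client
```

Note that the dropped cross edge is exactly the kind of information each client can no longer see, which motivates studying how much local topology is actually needed.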
Rebuttal 1: Rebuttal: Many thanks for your valuable feedback; we carefully reply to your concerns as follows.

**Response to Weakness(a):** The topology of a graph has a significant influence on node embedding. However, inspired by causal learning, we argue that only the critical topological information is a direct determinant of the class of node representations, which is called the causal factor, while the remaining topological information is called the biased factor.

**Response to Weakness(b):** Admittedly, the general form of the algorithm's generalizability analysis has its roots in domain generalization theory. Indeed, most studies on domain adaptation and domain generalization further analyze the distribution discrepancy $d_{\mathcal{H}\Delta \mathcal{H}}(\tilde{\mathcal{D}}_s,\tilde{\mathcal{D}}_t)$. Similarly, we analyze the term for the upper bound, but for graph data, it is necessary to consider not only differences in feature distributions, but also differences between topological distributions. Therefore, we perform a customized upper bound analysis, which is not available in the existing work on federated graph learning.

**Response to Weakness(c):** The computational complexity of FedATH mainly comes from the computation process on the clients and consists of three components. Let us take the computation of one client as an example. First, the forward propagation of the dual-GCN costs $O(|E|L)$, where $|E|$ denotes the number of edges and $L$ denotes the dimension of the embedding. Second, the computation of the edge evaluator costs $O(|E|d)$, where $d$ denotes the feature dimension of the nodes. Third, the calculation of the HSIC criterion takes $O(N^3)$. Overall, the computational complexity of the proposed FedATH is $O(|E|L+|E|d+N^3)$.
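To make the $O(N^3)$ HSIC term concrete, here is a minimal sketch of a biased HSIC estimate with linear kernels. The exact kernel and estimator used by FedATH are not specified in this exchange, so the choices below (linear kernels, the biased $\mathrm{tr}(KHLH)/(n-1)^2$ estimator) are illustrative assumptions.

```python
import numpy as np

def hsic(X, Y):
    """Biased HSIC estimate with linear kernels: tr(K H L H) / (n-1)^2,
    where H = I - 11^T/n centers the n x n Gram matrices K and L.
    The n x n matrix products are the source of the cubic-in-N cost."""
    n = X.shape[0]
    K = X @ X.T                          # Gram matrix of the first view
    L = Y @ Y.T                          # Gram matrix of the second view
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

X = np.array([[1.0], [2.0], [3.0], [4.0]])
Y_same = X.copy()          # fully dependent representations: HSIC > 0
Y_const = np.ones((4, 1))  # carries no information about X: HSIC = 0
print(hsic(X, Y_same), hsic(X, Y_const))
```

Minimizing such an HSIC term between the causal and biased embeddings is one way to encourage the two branches to capture independent information.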
**Response to Weakness(d):** We use Python and PyTorch to build the simulation environment, and perform the experiments on FedATH and the compared methods on a server equipped with an Intel(R) Core(TM) i9-10980XE processor, an RTX 3090 GPU, and 128 GB of RAM. Adam is selected as the optimizer, and the learning rate is set to 0.001. The numbers of communication rounds and local training epochs are fixed at 100 and 3, respectively.

**Response to Weakness(e):** We will optimize the figures in the following revision for a better presentation.

**Response to Essential References Not Discussed:** Study [1] proposes to decouple graph data into features of multiple potential subspaces through a new neighborhood routing mechanism, which mainly aims to mine the decoupling factors behind the data and enhance the interpretability and generalization of the model. However, this mechanism may not be suitable for solving topological heterogeneity problems in federated graph learning, since the separation of multiple decoupling factors is computationally and communication-unfriendly, and it is difficult to judge which factor really needs to be shared by all clients. In contrast, the proposed FedATH decouples the node representations at the topology level and only shares the causal networks, which is computationally and communicatively acceptable. Study [2] proposes a kernel-based optimization method to reduce the discrepancy between the source and target domains and proves it theoretically. However, as described in the Response to W(b), for federated graph learning, the effect of topology on the generalization error needs to be taken into account, and thus we tailor an analytical approach that is not found in the existing studies.

[1] Ma, J., Cui, P., Kuang, K., Wang, X., Zhu, W. Disentangled Graph Convolutional Networks. ICML, pp. 4212-4221, 2019. [2] Muandet, K., Balduzzi, D., Schölkopf, B. Domain generalization via invariant feature representation. ICML, pp. 10-18, 2013.
Summary: To address the topology heterogeneity of FGL, the authors proposed an interesting idea, namely, Less is More. Concretely, the unnecessary edges are discarded while the necessary edges are maintained. The CE loss and NE loss are separately used to train the corresponding GNNs. The HSIC loss is adopted to enforce the training of the dual-GNN. From both theoretical and experimental perspectives, the authors verify the superiority of the proposed FedATH.

## update after rebuttal

After the reviewer-author discussion phase, I maintain my score and explicitly support acceptance.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes. The traditional FGL methods tend to create extra global information to correct the local training, while the presented idea goes the other way and advocates discarding the unnecessary edges.

Theoretical Claims: Theorem 5.1 and Corollary 5.2 give the generalization bound for FGL; the proof is complete, which well supports the soundness of the proposed FedATH.

Experimental Designs Or Analyses: In order to verify the effectiveness of the proposed method, the authors conducted a large number of experiments, and the experimental design and analysis are fully considered.

Supplementary Material: I have read the appendix.

Relation To Broader Scientific Literature: The proposed FedATH aims to reduce the topology heterogeneity of FGL [1,2,3], discarding the unnecessary edges while maintaining the necessary edges. Compared to the existing literature, the proposed FedATH is well motivated and is meaningful for the FGL field. [1] Chen F, Li P, Miyazaki T, et al. Fedgraph: Federated graph learning with intelligent sampling. IEEE Transactions on Parallel and Distributed Systems, 2021, 33(8): 1775-1786. [2] Xie H, Ma J, Xiong L, et al. Federated graph classification over non-iid graphs. Advances in neural information processing systems, 2021, 34: 18839-18852. [3] Tan Y, Liu Y, Long G, et al.
Federated learning on non-iid graphs via structural knowledge sharing. Proceedings of the AAAI conference on artificial intelligence, 2023, 37(8): 9953-9961.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: Strengths: First, this paper proposes a new idea that discarding unimportant edges is more conducive to alleviating topological heterogeneity in federated graphs, which is different from the existing works. Second, many experiments are conducted on real-world graph datasets, and the results demonstrate that the proposed FedATH outperforms the other methods.

Major Weaknesses: 1) The proposed FedATH is compared with nine FL algorithms, only four of which are tailored to distributed graphs. At present, there are many FGL studies. Therefore, it is suggested to compare FedATH with more FGL methods. 2) The authors split a complete graph into multiple subgraphs via the Louvain algorithm. When the FL training finishes, how do the authors test the algorithms? Are they performed on the original graph? The details of the test phase should be explained.

Minor Weaknesses: 1) There are too many symbols in the paper; a table is needed to summarize them. 2) Some typos and grammatical errors still appear; the authors should carefully check the paper. 3) The format of several figures needs to be adjusted, especially the text size.

Other Comments Or Suggestions: None.

Questions For Authors: None.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments; the detailed responses are provided as follows.

**Response to Weakness1:** We have collected two new federated graph learning methods, FGGP [1] and FedGTA [2], and compared them with the proposed FedATH. The experimental results are reported in the following table; it can be observed that the proposed FedATH outperforms the new FGL methods.

| Method | Cora (K=10) | Cora (K=15) | Cora (K=20) | Photo (K=10) | Photo (K=15) | Photo (K=20) |
|----------|-------------|-------------|-------------|--------------|--------------|--------------|
| FGGP [1] | 73.96 | 71.75 | 65.84 | 88.56 | 86.27 | 84.25 |
| FedGTA [2] | 75.36 | 71.43 | 65.87 | 88.28 | 87.02 | 85.17 |
| FedATH | **77.90** | **73.42** | **67.97** | **90.33** | **88.61** | **85.50** |

[1] Wan, G., Huang, W., Ye, M. Federated graph learning under domain shift with generalizable prototypes. AAAI, pp. 15429-15437, 2024. [2] Li, X., Wu, Z., Zhang, W., et al. FedGTA: topology-aware averaging for federated graph learning. VLDB, pp. 41-50, 2024.

**Response to Weakness2:** In fact, when we distribute the subgraphs to multiple clients, each client has its own training set and test set. Thus, after federated training, the global model is tested on each client and eventually the average performance over all clients is reported.

**Response to Weakness3:** We summarize the meanings of the main symbols in the following table.
| Symbol | Meaning |
|----------------------|------------------------------------------------------------------|
| $G=(V, E, X)$ | A graph dataset: node set, edge set, feature matrix |
| $G_k^c$, $G_k^b$ | The causal subgraph, the biased subgraph |
| $A_k$, $H_k$ | The adjacency matrix, the latent embedding for the k-th client |
| $C$, $K$ | The number of classes, the number of clients |
| $N_k$, $N$ | The number of nodes for the k-th client, the total number of nodes |
| $c_{ij}$, $w_{ij}$ | The importance score for edge $e_{ij}$, the normalized score |
| $\Omega_k^c$, $\Omega_k^b$ | The causal edge mask, the biased edge mask for the k-th client |
| $f_k^c$, $f_k^b$, $\Phi_k$ | The causal GCN, the biased GCN, the edge evaluator for the k-th client |
| $\lambda$ | The trade-off parameter |

**Response to Minor Weaknesses:** Thanks for the meticulous advice. We will improve the presentation in a revised version.

--- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. I notice that the authors test the global model on the local test set at the end of local training, while some algorithms are tested before local training starts; different testing methods may also lead to differences in the results. The authors should compare the different testing methods.

--- Reply to Comment 1.1.1: Comment: Thank you for this valuable comment. We test the performance at the beginning of a new communication round on the Cora and Photo datasets; the ACC results are reported in the following table. It can be seen that the proposed FedATH still achieves superior performance compared to the other FL and FGL methods.
| Method | Cora K=10 | Cora K=15 | Cora K=20 | Photo K=10 | Photo K=15 | Photo K=20 |
|------------|-----------|-----------|-----------|------------|------------|------------|
| FedAvg | 75.78 | 70.88 | 63.09 | 86.82 | 68.86 | 77.00 |
| FedProx | 76.49 | 71.74 | 64.75 | 87.34 | 68.70 | 77.06 |
| MOON | 77.02 | 71.58 | 62.50 | 86.40 | 71.43 | 76.05 |
| FedOPT | 74.91 | 67.63 | 60.35 | 86.25 | 72.12 | 75.81 |
| FedProto | 75.43 | 70.46 | 63.94 | 88.31 | 71.34 | 78.40 |
| FedSage+ | 76.67 | 72.64 | 65.29 | 87.35 | 74.00 | 78.61 |
| FGSSL | 76.94 | 73.64 | 64.87 | 82.54 | 74.15 | 77.43 |
| FedPUB | 76.78 | 72.17 | 66.98 | 87.54 | 72.30 | 79.60 |
| FedTAD | 77.38 | 71.84 | 66.47 | **89.66** | 74.11 | 78.37 |
| FedATH | **77.55** | **73.78** | **68.54** | 89.33 | **75.66** | **79.82** |
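As a concrete illustration of the edge evaluator's outputs listed in the symbol table above ($c_{ij}$, $w_{ij}$, $\Omega_k^c$, $\Omega_k^b$), the sketch below turns raw edge scores into two complementary masks. The sigmoid normalization and the fixed keep-ratio are illustrative assumptions, not necessarily the paper's exact mechanism.

```python
import math

def split_edges(scores, causal_ratio=0.5):
    """Normalize raw edge scores c_ij with a sigmoid to get w_ij, then keep
    the top `causal_ratio` fraction of edges as the causal mask and the rest
    as the biased mask, so the two masks partition the edge set."""
    w = [1.0 / (1.0 + math.exp(-c)) for c in scores]
    k = max(1, int(len(w) * causal_ratio))
    top = set(sorted(range(len(w)), key=lambda i: w[i], reverse=True)[:k])
    causal = [1 if i in top else 0 for i in range(len(w))]
    biased = [1 - m for m in causal]       # complementary by construction
    return causal, biased

c, b = split_edges([2.0, -1.0, 0.5, -3.0])
print(c, b)  # [1, 0, 1, 0] [0, 1, 0, 1]
```

The causal mask would feed the shared causal GCN and the biased mask the local biased GCN, matching the dual-branch design described in the rebuttals.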
Summary: This paper proposed a reduced-edge based federated graph learning method that aims to mitigate the effects of topological heterogeneity on federated learning. Specifically, the proposed FedATH assesses the importance of each edge via an edge evaluator. Thus, the local subgraph is divided into causal and biased subgraphs. The causal subgraph is deemed to capture the essential information and is used to perform the classification.

Claims And Evidence: The claims of this paper are clear, and the experimental results support the claims.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the federated graph learning field.

Theoretical Claims: I checked Theorem 5.1 and Corollary 5.2, and I believe that they are correct.

Experimental Designs Or Analyses: Overall, the experimental part is complete, but I still have some concerns: 1. More analysis should be provided on the effects of sharing different components. 2. Large-scale clients are an important challenge for federated learning, and I would like to know how the proposed algorithm performs in the face of large-scale clients.

Supplementary Material: I focused on the proof of Theorem 5.1.

Relation To Broader Scientific Literature: This paper follows the problem setting of literature [1] and [2] and aims to address the difficulties posed by topological heterogeneity for federated learning. However, unlike them, this work mitigates the problem of topological heterogeneity by reducing data information rather than increasing it. Furthermore, the proposed method adopts the HSIC criterion [3]. [1] Ke Zhang, Carl Yang, Xiaoxiao Li, Lichao Sun, and SiuMing Yiu. Subgraph federated learning with missing neighbor generation. [2] Yinlin Zhu, Xunkai Li, Zhengyu Wu, Di Wu, Miao Hu, and Rong-Hua Li. Fedtad: Topology-aware data-free knowledge distillation for subgraph federated learning. [3] Greenfeld, Daniel and Shalit, Uri.
Robust learning with the Hilbert-Schmidt independence criterion.

Essential References Not Discussed: I'm not aware of any additional literature that needs to be discussed.

Other Strengths And Weaknesses: Strengths: 1. In terms of topological heterogeneity in federated graph learning, the authors provide a new idea for solving the problem by reducing data information rather than introducing new information. 2. It is remarkable that this paper theoretically guarantees that the proposed method can effectively solve the topological heterogeneity problem. 3. This paper is well-written and the presentation is excellent.

Weaknesses: 1. The authors mention dividing the graph into causal and biased subgraphs; however, their exact definitions are not made clear. 2. In fact, I don't fully understand what the authors mean by topological heterogeneity and how it differs from the common data heterogeneity of federated learning. 3. The questions about the experiments are listed above in 'Experimental Designs Or Analyses'.

Other Comments Or Suggestions: N/A

Questions For Authors: In addition to the above weaknesses, I still have one more concern: to decouple the causal graph and the biased graph, the authors encode the two graphs locally using two GCNs. This process seems to bring a higher computational cost; does the proposed method consume a much higher runtime than the compared methods?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your professional advice; the detailed responses are written as follows.

**Response to Weakness1:** The difference between causal and biased graphs is whether they contain critical edge information. We use an edge evaluator to assess the importance of each edge. The subgraph that contains the important edges is called the causal subgraph, and the subgraph that contains the remaining edges is called the biased subgraph.

**Response to Weakness2:** In traditional federated learning, the samples are isolated from each other and do not affect each other in model training; the model is only affected by differences in the data label distribution or feature distribution. In contrast, in federated graph learning, the samples have connecting edges between them, and the sample representations are affected by the connecting edges between different nodes. For example, the same class of nodes may connect to different classes of nodes in different clients; then the representations might be heterogeneous even for the same classes of nodes in various clients. Therefore, topological heterogeneity is different from traditional label heterogeneity or feature heterogeneity.

**Response to Weakness3:** First, to verify that sharing only causal networks is sufficient, we tested the algorithm's performance when sharing different network components. In most scenarios, sharing only the causal network achieves optimal results, which indicates that the causal factor is a key component affecting the performance of the model. It should also be noted that sharing edge evaluators may also be beneficial for improving the performance, due to the fact that the fusion of edge information from different clients improves the quality of the evaluators. Second, the experimental results on the Photo dataset with large-scale clients are reported in the following table. We can see that the proposed FedATH still takes the leading position in the scenario of large-scale clients.
| Method | Photo (K=50) | Photo (K=80) | Photo (K=100) |
|-----------|--------------|--------------|---------------|
| FedAvg | 84.20 | 83.11 | 82.38 |
| FedProx | 84.23 | 83.11 | 82.41 |
| MOON | 84.79 | 82.99 | 82.37 |
| FedOPT | 80.56 | 78.78 | 79.16 |
| FedProto | 75.04 | 74.16 | 72.20 |
| FedSage+ | 84.88 | 83.44 | 82.98 |
| FGSSL | 85.71 | 83.45 | 83.38 |
| FedPUB | 83.69 | 82.18 | 79.75 |
| FedTAD | 84.97 | 83.72 | 83.23 |
| FedATH | **86.10** | **84.16** | **83.43** |

--- Rebuttal Comment 1.1: Comment: The author addressed most of my questions, but I still have one concern, which is posted in "Questions for Authors".

--- Reply to Comment 1.1.1: Comment: Thanks for your insightful comment. We compare the running time (seconds) on the Cora and Photo datasets for all compared methods. Admittedly, the running time of the proposed FedATH is increased due to the dual-GCN encoding structure and the computation of the HSIC loss, but it is well within acceptable limits. In particular, on the Photo dataset, the running efficiency of our algorithm is better than that of FedProto, FedSage+, and FedTAD.

| Method | Cora K=10 | Cora K=15 | Cora K=20 | Photo K=10 | Photo K=15 | Photo K=20 |
|------------|-----------|-----------|-----------|------------|------------|------------|
| FedAvg | 12.36 | 18.08 | 23.63 | 12.23 | 18.89 | 23.36 |
| FedProx | 15.01 | 22.46 | 30.72 | 15.28 | 22.73 | 30.36 |
| MOON | 21.76 | 33.19 | 43.56 | 21.77 | 32.15 | 46.17 |
| FedOPT | 12.73 | 18.06 | 23.95 | 12.91 | 18.76 | 24.39 |
| FedProto | 49.74 | 56.38 | 62.68 | 112.01 | 125.97 | 131.15 |
| FedSage+ | 20.02 | 32.03 | 42.86 | 78.27 | 118.02 | 142.35 |
| FGSSL | 28.13 | 41.85 | 56.09 | 30.68 | 42.65 | 57.16 |
| FedPUB | 16.95 | 26.10 | 35.71 | 17.63 | 25.99 | 36.12 |
| FedTAD | 535.85 | 890.41 | 1315.30 | 679.24 | 1027.08 | 1513.65 |
| FedATH | 51.03 | 75.41 | 86.42 | 55.91 | 79.57 | 104.40 |
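The "share only the causal network" aggregation discussed in the response to Weakness 3 can be sketched as a FedAvg step restricted to a subset of named parameters. The parameter-naming convention and the prefix filter below are hypothetical; only the idea (average the causal branch, keep the biased branch local) comes from the rebuttal.

```python
def federated_average(client_states, shared_prefix="causal."):
    """Average only the parameters whose names start with `shared_prefix`
    (the causal GNN); everything else (biased GNN, etc.) stays on the
    client and never reaches the server."""
    shared_keys = [k for k in client_states[0] if k.startswith(shared_prefix)]
    return {k: sum(s[k] for s in client_states) / len(client_states)
            for k in shared_keys}

# Two toy clients: only the causal weight is aggregated.
clients = [
    {"causal.w": 1.0, "biased.w": 5.0},
    {"causal.w": 3.0, "biased.w": -5.0},
]
print(federated_average(clients))  # {'causal.w': 2.0}
```

Keeping the biased branch local also reduces the communication cost, since only a fraction of the model is transmitted each round.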
Summary: The paper proposes to divide the local graph into a causal subgraph and a biased subgraph to alleviate topology heterogeneity; the causal subgraph possesses the key information for the downstream task, and the biased subgraph possesses the confusing information. Thus, only the causal graph neural networks are shared with the server. The generalization bound for the proposed FedATH is derived, and the experimental results also support the authors' argument.

Claims And Evidence: The authors claim that fewer edges are more conducive to handling topology heterogeneity; the generalization analysis and experimental results support their claims.

Methods And Evaluation Criteria: The proposed FedATH explores the local causal subgraph and improves the performance on the test datasets, which is meaningful for the domain of federated graph learning.

Theoretical Claims: I checked the generalization analysis in Section 5 and the proof process in the Appendix; I'm leaning towards them being right.

Experimental Designs Or Analyses: Overall, the experimental section is ok.

Supplementary Material: The appendix contains the theorem proofs and supplementary experiments; I have reviewed them.

Relation To Broader Scientific Literature: Representation division [1][2] has been applied in some scenarios; it aims at capturing an invariant representation from the coupled representation. The proposed FedATH is also based on this theory. [1] On learning invariant representations for domain adaptation. In ICML 2019. [2] Self-supervised learning of pretext-invariant representations. In CVPR 2020.

Essential References Not Discussed: No, the current literature discussion is adequate.

Other Strengths And Weaknesses: Strengths: 1. In Figure 1, the authors visualize what topology heterogeneity is and the shortcomings of existing methods, showing that the proposed FedATH is well motivated. 2.
The authors provide a rigorous derivation of the generalizability analysis for the proposed FedATH, and the conclusion shows that mining causal subgraphs and abandoning biased subgraphs is beneficial for enhancing the generalizability of the model.

Weaknesses: 1. Corollary 5.2 illustrates that mining the causal subgraphs is beneficial for reducing topology heterogeneity. Consider the extreme case where all edges are removed: would the performance be optimal? 2. In the experiments, the FL algorithms are tested with up to 20 clients. However, more clients will be involved in practical applications. So, it is necessary to conduct the experiments with more clients, at least 100 clients. 3. In the related work, only FL and FGL are introduced. Causal learning is a key technique; a review of it is indispensable.

Other Comments Or Suggestions: See the above weaknesses.

Questions For Authors: 1. The code of the proposed FedATH is unavailable; to validate its reproducibility, it is suggested to release the code. 2. Is the so-called topological heterogeneity just a form of label heterogeneity on a graph, and what is the essential difference between the two? 3. In Eq. (9), is it reasonable to take an expectation over all nodes? Would it be more appropriate to take the expectation only over labeled nodes?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments; we respond to your concerns as follows.

**Response to Weakness1:** We test the performance of several federated learning methods when all the edges are removed on the Cora dataset. It can be seen that although topological heterogeneity causes a decline in federated learning, the topology still favors feature learning. Therefore, it is not the case that removing the topology guarantees that optimal performance will be achieved. We give an upper bound on the generalization error in the theorem. However, for practical purposes, a learning system also has a lower bound on its generalization error, and that lower bound is difficult to derive.

| Method | Cora (K=10) | Cora (K=15) | Cora (K=20) |
|-----------|-------------|-------------|-------------|
| FedAvg | 62.57 | 64.99 | 61.26 |
| FedProx | 64.22 | 65.43 | 62.47 |
| MOON | 63.97 | 65.34 | 61.60 |
| FedOPT | 62.75 | 64.61 | 62.17 |
| FedProto | 60.72 | 62.21 | 60.17 |

**Response to Weakness2:** Validating the adaptability of federated learning algorithms in large-scale client scenarios is necessary. We conduct the experiments with 50, 80, and 100 clients on the Photo dataset. It can be seen that the proposed FedATH still outperforms the other methods in the scenario of large-scale clients, demonstrating that it adapts well to more clients.
| Method | Photo (K=50) | Photo (K=80) | Photo (K=100) |
|-----------|--------------|--------------|---------------|
| FedAvg | 84.20 | 83.11 | 82.38 |
| FedProx | 84.23 | 83.11 | 82.41 |
| MOON | 84.79 | 82.99 | 82.37 |
| FedOPT | 80.56 | 78.78 | 79.16 |
| FedProto | 75.04 | 74.16 | 72.20 |
| FedSage+ | 84.88 | 83.44 | 82.98 |
| FGSSL | 85.71 | 83.45 | 83.38 |
| FedPUB | 83.69 | 82.18 | 79.75 |
| FedTAD | 84.97 | 83.72 | 83.23 |
| FedATH | **86.10** | **84.16** | **83.43** |

**Response to Weakness3:** We agree that a review of causal learning is necessary; such a review is written as follows. Causal learning [1,2] aims to uncover the causal factors that affect data distributions and has gained widespread attention in machine learning. Mining causal factors is often used in invariant representation learning. For example, Lu et al. [3] aimed to discover the independent variables in the data under the nonlinear setting. [4] further disentangled the causal assumptions into invariant factors and nonstable factors. [5] proposed a causal variational autoencoder that captured the causal structures in the data. In recent years, to learn invariant representations in graph learning, some studies have also absorbed the idea of causal learning. For instance, [6] explored a key subgraph from the initial graph to capture the most important information. [7] proposed graph contrastive invariant learning to enhance the performance of graph representation. [8] introduced a graph autoencoder to learn invariant graph representations across multiple domains. Inspired by causal learning, we believe that the local graph consists of a causal subgraph and a biased subgraph, and sharing the causal information in them helps enhance the generalization of the global model. [1] Pearl, J. Causality. 2009. [2] Causality for trustworthy artificial intelligence: status, challenges and perspectives. ACM Computing Surveys, 2025.
[3] Invariant causal representation learning for out-of-distribution generalization. ICLR, 2022. [4] Invariant and transportable representations for anti-causal domain shifts. NIPS, 2022. [5] Causalvae: Disentangled representation learning via neural structural causal models. CVPR, 2021. [6] Learning causally invariant representations for out-of-distribution generalization on graphs. NIPS, 2022. [7] Graph contrastive invariant learning from the causal perspective. AAAI, 2024. [8] Learning causal representations based on a gae embedded autoencoder. IEEE TKDE, 2025.

**Response to Question1:** We have released the code at https://anonymous.4open.science/r/FedATH-9517.

**Response to Question2:** Label heterogeneity emphasizes the bias created by the label distributions of independent samples. Topological heterogeneity, on the other hand, places more emphasis on the variance produced by the connecting-edge relationships between nodes. In other words, label heterogeneity can lead to topological heterogeneity, while topological heterogeneity may not be caused by label heterogeneity.

**Response to Question3:** The purpose of Eq. (9) is to smooth the predictive distribution output by the biased network, not just for labeled nodes, but for all nodes.

--- Rebuttal Comment 1.1: Comment: The authors have addressed my questions. I maintain my score and support its acceptance.
Sample, Scrutinize and Scale: Effective Inference-Time Search by Scaling Verification
Accept (poster)
Summary: The paper studies a minimalist implementation of test-time scaling that uses only random sampling and direct self-verification. The contributions include: 1. the paper shows that the sample-and-verify method is surprisingly effective, and that it is beneficial to scale both the number of solution samples per question and the number of verification samples per solution. 2. the paper proposes two effective methods to improve self-verification: pairwise response comparison and response style rewriting. 3. the paper introduces a benchmark to evaluate the verification accuracy of LLMs.

## update after rebuttal

I agree with the clarification of the main contributions. Therefore I raised my score to 3. However, I still think the implicit scaling is not reasonable and the dataset lacks diversity.

Claims And Evidence: The paper claims the existence of implicit scaling of verification accuracy: increasing the number k of generated responses increases the conditional probability that one of the correct responses receives the highest verification score, given that at least one response is correct (Pass@k=1). This is not intuitively or theoretically verified and is only supported by limited evidence on a single model. I note that the trend on the MATH benchmark in Figure 2 is downwards when k is larger than 80, contradicting the claim. Also, the figure omits questions that Consistency@200 answers correctly to reduce noise; this might also change the results.

Methods And Evaluation Criteria: The idea of scaling the number of verification samples is a sensible way to improve verification accuracy. The two methods to improve self-verification also make sense. The benchmarks used to evaluate reasoning ability are AIME, MATH, and LiveBench, which are common benchmarks in mathematical reasoning. Including reasoning benchmarks in other areas, like LiveCodeBench (for coding reasoning) and GPQA (for science reasoning), would be better.

Theoretical Claims: No proofs in the paper.
Experimental Designs Or Analyses: Yes. The experimental setting is simple and solid.

Supplementary Material: No

Relation To Broader Scientific Literature: The effectiveness of search against a verifier is already demonstrated in Snell et al., 2024. They also show that search against a verifier can improve accuracy when majority voting saturates. The verification benchmark is in fact a benchmark for reward models on mathematical reasoning. It is known that the accuracy of a reward model is sensitive to the distribution of responses (Zheng et al., 2024). The benchmark consists of only responses from Gemini-1.5-pro; I think this is insufficient to evaluate frontier models. Snell, C., Lee, J., Xu, K., and Kumar, A. Scaling llm test-time compute optimally can be more effective than scaling model parameters, 2024. Zheng, C., Zhang, Z., Zhang, B., Lin, R., Lu, K., Yu, B., Liu, D., Zhou, J. and Lin, J., 2024. Processbench: Identifying process errors in mathematical reasoning. arXiv preprint arXiv:2412.06559.

Essential References Not Discussed: The difference from Snell et al., 2024 is not sufficiently discussed.

Other Strengths And Weaknesses: I think the paper is a mixture of contributions to three rather independent directions, while the contribution to each direction is insufficient. First, the paper highlights the impact of scaling verification compute besides inference compute. However, it is obvious that increasing verification samples leads to higher accuracy, since verification is also a reasoning task and majority voting improves its accuracy. Since there are two scaling factors (the number of solution samples per question and the number of verification samples per solution) and both require computing resources, the key problem is the optimal allocation between the two factors given a limited compute budget. Second, the paper proposes two effective methods to improve self-verification. However, their contribution to the end-to-end accuracy is not measured.
Also, the pairwise comparison is only applied at the tie-break stage to the three highest-scoring candidates.

Third, the paper introduces a benchmark to evaluate the verification ability of LLMs. The verification candidates in the benchmark are all sampled from Gemini-1.5-pro, which means it cannot comprehensively evaluate verification accuracy on diverse solutions.

Other Comments Or Suggestions: The authors should move Section 4 (Technical Details) to the appendix and Appendix A (Related Work) into the main paper.

Questions For Authors: How is the threshold 0.05 in the algorithm selected?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
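As background for the Pass@k quantities discussed in this review, the standard unbiased estimator of Pass@k from n samples with c correct can be sketched as follows (a well-known formula from the repeated-sampling literature, not specific to this paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k drawn samples is correct),
    given n total samples of which c are correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 4 samples of which 2 are correct, `pass_at_k(4, 2, 2)` gives 5/6: only one of the six size-2 subsets contains no correct sample.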
Rebuttal 1: Thank you for your review.

> The main contribution of the paper.

We view our main contribution as showing that scaling search with self-verification works on frontier models and providing an explanation for why: implicit scaling. While it may seem obvious that “increasing verification samples leads to higher accuracy”, many papers (including Snell et al.) have studied self-consistency and self-verification and attempted to scale them up, but none have reported the same success. In fact, we aren’t aware of any prior works that have successfully applied self-verification to frontier models in a way that actually beats self-consistency. Instead, to beat self-consistency, the field has turned to training process-based reward models, using RL to learn verifiers, and other complicated interventions. “Increasing verification samples” sounds like a simple panacea but making it work is a different matter. This has led to a general belief that simple self-verification isn’t enough, whereas we show that it is—you just need to hit a certain threshold of scale and obey some important principles.

It's also noteworthy that direct self-verification can successfully and reliably extract o1-level performance from a non-reasoning model without any finetuning, distillation, RL, or custom-trained verifiers (again, no other works/methods—verification-based or otherwise—have achieved this). We therefore respectfully, but strongly, disagree with the assessment that “the key problem is what is the optimal allocation between the two factors given a limited compute budget”; prior to this work, it wasn’t even known that a near-unbounded compute budget is sufficient to recover o1-level performance without interventions like RL.

> “The difference with Snell et al., 2024.”

Our paper differs from Snell et al. in several important ways. First, we study entirely different questions: Snell et al.
asks what the optimal test-time scaling trade-off is between pretraining versus inference-time scaling; we ask what the scaling trends and limits of sampling with self-verification are. Second, we study qualitatively different regimes: making verification improve on self-consistency is easy when trying to improve e.g. MATH performance from 10-40% (as is done in Snell et al. 2024); making verification improve on self-consistency on the last 90% of MATH or on actually challenging benchmarks like AIME is significantly more difficult (we aren’t aware of any works prior to ours that were able to do so without RL). Third, Snell et al. 2024 trains a process-based verifier reward model; we show that direct self-verification works fine.

> “The benchmark consists of only responses from Gemini-1.5-pro. I think it is insufficient to evaluate frontier models.”

Gemini 1.5 Pro-002 was released in September 2024, making it one of the main frontier models at the time of this submission. Our paper also includes results on Gemini-1.5-Flash (a significantly smaller model with major qualitative differences) in Appendix B.1. We also note for context that the Snell et al. 2024 paper ran experiments only on the PaLM-2 model and on a single benchmark (MATH).

> Omit questions that Consistency@200 answers correctly to reduce noise. It might also change the results.

On MATH for example, 98% of “ambiguous questions” are easy questions that Consistency@200 answers correctly and hence Verification@200 answers correctly. When we plot Figure 2 without omitting these questions, we end up with a line that goes from 98% to 99%, which entirely misses the point of the plot (which is to highlight implicit scaling). To fulfill the reviewer’s request, we will include an analog of Figure 2 in the Appendix where Consistency@200 filtering is not done. However, no trends change.
For example, on AIME, Consistency@200 only gets 1-2/15 questions correct—omitting those questions has no material impact on the trends observed in Figure 2.

> MATH trend on Figure 2.

Increasing the number of generated samples makes verification on ambiguous questions harder; this follows from basic probability: if you allow students more attempts at an exam, the average pass-rate of a student who passes at least one exam decreases. Similarly, as you increase the number of attempts, the average number of correct attempts per question decreases. Thus, if implicit scaling didn’t exist, *all* lines in Figure 2 should be strictly decreasing. The fact that we observe the opposite proves that implicit scaling must be happening. The fact that there is a 63%->61% drop on the right-hand side of the MATH plot is not evidence against this and is—if anything—expected: we can’t expect implicit scaling to be universally powerful enough to completely reverse the natural bias of increasing verification difficulty.

---

Rebuttal Comment 1.1: About the benchmark, what I mean is that as a benchmark for verification it should cover diverse solutions, not be limited to solutions from a single model. It is fine as empirical evidence supporting the idea in the paper, but not enough for a standalone benchmark.

I think the paper doesn't give a clear definition of implicit scaling. In the abstract, implicit scaling is defined as "sampling a larger pool of responses in turn improves verification accuracy". How is verification accuracy defined? Is it pairwise accuracy given two randomly sampled responses? Also, on line 80, the paper claims that implicit scaling arises because well-written responses are easier to verify than poorly written responses, and scaling sampling widens the pool of well-written candidates. However, scaling sampling also widens the pool of poorly written responses. If you consider pairwise accuracy, it should be the same.
---

Reply to Comment 1.1.1:

**Benchmark.** Thanks for the clarification and apologies for misunderstanding. We agree that's a reasonable point to make; for example, verification performance on reasoning-model (o1/r1/gemini thinking) solutions may differ from those of base models. Our motivation for providing the benchmark was mainly to highlight some of the failure modes we encountered, and hence focused on 1.5 solutions. We are augmenting the dataset per your suggestions: we have already curated an additional set of accompanying entries for the Gemini 2.0-Flash Thinking model, and are currently procuring R1 scores as well.

**Implicit Scaling.** We can define implicit scaling as follows:

* Let $Q_k$ denote the questions on which at least one of the model's $k$ solutions are correct.
* Let $x_k$ denote the fraction of questions in $Q_k$ where the verifier correctly identifies a correct solution from the $k$ candidates (verification accuracy).
* Implicit scaling is where $x_k$ increases with $k$ in expectation.

It's commonly believed that $x_k$ (verification accuracy) should decrease with $k$; this is provably the case if verifiers select randomly; you are indeed correct that this is also the case if you define verification accuracy as pairwise comparison accuracy. However, we show $x_k$ actually increases with $k$, even though larger $k$ does increase the pool of poorly written responses. Note that $x_k$ is what we truly care about with verification---we do use pairwise comparisons, but only as a means of improving $x_k$. We appreciate the feedback on clarifying "verification accuracy" and will add an extended form of this discussion to our text.
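The $x_k$ quantity defined in the reply above can be made concrete with a toy Monte-Carlo sketch (the response and score model here is hypothetical, not from the paper): each simulated question draws $k$ responses, each correct with probability `p_correct`, and a verifier assigns noisy scores biased upward for correct responses; $x_k$ is the fraction of questions in $Q_k$ where the top-scored response is correct.

```python
import random

def estimate_x_k(k: int, p_correct: float = 0.2, verifier_bias: float = 2.0,
                 trials: int = 20000, seed: int = 0) -> float:
    """Toy Monte-Carlo estimate of x_k: among simulated questions in Q_k
    (>= 1 correct response among k), how often the verifier's top-scored
    response is correct. Hypothetical model for illustration only."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        correct = [rng.random() < p_correct for _ in range(k)]
        if not any(correct):
            continue  # question not in Q_k
        # verifier: noisy score, biased upward for correct responses
        scores = [rng.gauss(verifier_bias if c else 0.0, 1.0) for c in correct]
        best = max(range(k), key=scores.__getitem__)
        total += 1
        hits += correct[best]
    return hits / total
```

Setting `verifier_bias=0.0` gives a random verifier, under which $x_k$ provably decreases with $k$; an observed increase in the real data is therefore evidence of implicit scaling rather than an artifact of the selection into $Q_k$.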
Summary: This paper studies the inference-time scaling of LLMs for reasoning tasks in a sampling-based search setting. The authors first study test-time scaling along two important dimensions: search (the number of sampled candidates) and verification (the number of verification scores computed). While scaling along both axes improves accuracy, the authors additionally discover an implicit scaling effect whereby larger sampling pools actually increase verification accuracy. Building on these findings, the paper identifies two principles to boost self-verification in LLMs: candidate pair comparison and response rewriting.

Claims And Evidence: 1. I am confused about the claim "Frontier language models demonstrate a remarkable mismatch between the problem-solving capabilities and poor out-of-box verification capabilities". I am wondering how the "poor out-of-box verification capabilities" are demonstrated experimentally in the paper.

Methods And Evaluation Criteria: 1. The authors propose Verification@k as a variant of sampling-based search to study self-verification in LLMs, which is sound and interesting. 2. The two proposed principles are well supported by experiments on the newly constructed benchmark. 3. The benchmark selection of MATH, AIME, and LiveBench is reasonable.

Theoretical Claims: This paper does not contain theoretical claims and corresponding proofs.

Experimental Designs Or Analyses: The experiment design in this paper is valid and reasonable. I have some additional suggestions for experiments in the weaknesses/questions section below.

Supplementary Material: I have reviewed all the appendices in the paper, which contain the related work, more experiments with different model scales/sub-tasks, prompt contents, and case studies.

Relation To Broader Scientific Literature: This paper provides another comprehensive investigation in the area of test-time computing, specifically scaling at inference time.
One of the key contributions is the exploration of scaling with "verification". The authors propose a very intuitive "Verification@k" algorithm, which could serve as a baseline for follow-up works. This paper also offers insights not seen in previous works that would be valuable to the community, such as the conclusion in "Pass@k vs. Pass@1". The proposed verification dataset and the two design principles are also new and interesting.

Essential References Not Discussed: The literature discussion/related work is thorough in my opinion.

Other Strengths And Weaknesses: 1. I appreciate the authors' reporting of compute/token consumption in Section 4. Given the expensive cost of the proposed verification scaling, I would suggest the authors also report the token consumption of baselines (e.g., Consistency@k) for a fairer comparison. 2. The paper offers interesting insights for researchers and practitioners that may be beneficial for downstream tasks, for example the insights on Pass@k vs. Pass@1. 3. The paper is overall well-written and is a joy to read.

Other Comments Or Suggestions: 1. Algorithm 1 may need some improvement to make it easier for readers to understand. For example, it took me some time to understand that "1[is s_i correct?]" is actually a binary classification of the given query for verification. 2. This may sound a bit demanding, but I think the paper would further benefit from adding analyses of R1/R1-Zero models, for example the distilled versions of R1, and analyzing their behaviours w.r.t. sampling-based search. 3. Since the authors mention "rewriting responses with increasing rigour and structure", I am not sure the current implementation/design in Table 4 actually reflects this.

Questions For Authors: See the above reviews.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Thank you for your review! We address your questions/comments below.

> “I would suggest the authors to also report the token consumption of baselines (e.g., Consistency@k) as well for a more fair comparison”.

We appreciate the suggestion and agree it would be useful. As our focus was identifying broader scaling trends rather than computational efficiency, we chose to study a minimalist implementation of sampling-based search that leaves significant room for optimization, e.g. via prompt caching, delegated search, small verifiers, etc.—all exciting directions for future work as the reviewer has accurately noted.

> Algorithm 1

We appreciate the feedback and will revise the algorithm description accordingly. The notation 1[is correct] would indeed benefit from clarification.

> Adding analyses of R1/R1-Zero models.

This is a great suggestion and something we hope to do. Unfortunately, since R1 was released only 3 days before the ICML deadline, we were unable to include an analysis in this submission, but we agree that it’s valuable to analyze R1’s sampling-based search trends.

> On Table 4

Table 4 depicts the implications of ablating the rewriting step from the verification process, showing a significant drop. We include in Appendix 3 a few examples of the outcomes of the rewriting step, which illustrate the rigor and structure that result. We will add a more detailed discussion to this effect as well.
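For concreteness, the Verification@k loop under discussion can be sketched as follows (an illustrative reading of the procedure, not the paper's exact Algorithm 1; `generate` and `verify` are placeholders for LLM calls, and the tie-breaking stage via pairwise comparison of top candidates is omitted, as in the paper's Section 2):

```python
def verification_at_k(question, generate, verify, k_inf=200, k_verif=50):
    """Sample k_inf candidate responses, score each by averaging k_verif
    binary self-verification attempts, and return the top-scoring one.
    Illustrative sketch only; tie-breaking is omitted."""
    candidates = [generate(question) for _ in range(k_inf)]
    scored = []
    for cand in candidates:
        votes = [verify(question, cand) for _ in range(k_verif)]  # 0/1 votes
        scored.append((sum(votes) / k_verif, cand))
    return max(scored)[1]  # candidate with the highest mean verification score
```

With stub callables in place of the LLM, e.g. `verification_at_k("2+2?", my_sampler, my_checker, k_inf=3, k_verif=2)`, the function returns whichever candidate accumulates the most verification votes.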
Summary: This paper investigates the scalability of sampling-based search methods for reasoning tasks and proposes a minimal yet effective sampling-based search with self-verification approach. The key contributions of this work include:

1. A systematic analysis of how inference performance scales with computation, revealing that both search scale (`k_inf`) and verification capacity (`k_verif`) jointly influence reasoning accuracy. The experiments demonstrate that verification plays a crucial role in improving inference reliability.
2. The introduction of the implicit scaling phenomenon, showing that increasing the search scale unexpectedly improves verification accuracy—i.e., as the search space expands, the probability of generating high-quality answers increases, leading to overall performance gains.
3. An evaluation of the out-of-the-box self-verification capability of leading models, showing that GPT-4o and Claude 3.5 Sonnet perform only marginally better than random guessing on self-verification tasks, indicating a significant limitation in current LLM reasoning capabilities.

The paper conducts large-scale experiments on Gemini v1.5 Pro, demonstrating the advantages of Verification@200 over Consistency@200 in complex reasoning tasks. Furthermore, it proposes comparison-based verification and response rewriting as optimization strategies to enhance model verification. Overall, the paper is well-structured, with a clear motivation, rigorous experimental design, and insightful conclusions.
Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: N/A

Experimental Designs Or Analyses: Yes

Supplementary Material: Yes

Relation To Broader Scientific Literature: Enhances the trustworthiness of LLMs.

Essential References Not Discussed: No

Other Strengths And Weaknesses: Comments and Suggestions:

- Limited application of Verification@200 to other LLMs: perhaps due to constraints in computational resources, the authors did not apply Verification@200 to models such as GPT-4o, which would have provided a broader validation of the method's generalizability. It would be beneficial to explore its impact on multiple state-of-the-art models.
- Positioning relative to alternative reasoning enhancement methods: While Verification@200 demonstrates performance gains, I am concerned about its computational cost compared to alternative reasoning frameworks such as Tree-of-Thoughts (ToT) and reinforcement learning-based approaches. Given the increasing focus on efficient inference, should Verification@200, which requires large-scale computation, be the primary direction for improving model reasoning? A comparative discussion with these alternative methods would strengthen the paper's positioning.

Overall, the work presents a compelling approach to enhancing reasoning through scalable verification but would benefit from additional discussion of computational efficiency and comparisons with alternative reasoning paradigms.

Other Comments Or Suggestions: No

Questions For Authors: See the Comments and Suggestions.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Thank you for your review! We address your questions below.

> Computational cost compared to alternative reasoning frameworks such as Tree-of-Thoughts (ToT) and reinforcement learning-based approaches.

Optimizing computational efficiency was not the main focus of this paper, which focuses on understanding the broader scaling trends underlying sampling-based search via self-verification. While we chose to study a minimalist implementation of sampling-based search, we expect our insights to generalize to alternative implementations that place a premium on computational efficiency. In fact, there are several avenues along which the effectiveness and efficiency of our minimalist sampling-based search implementation can be improved: prompt caching, delegated search, training smaller verifiers, etc.—all exciting directions for future work.

Regarding ToT and RL, these are complementary techniques: you can always apply sampling-based search and self-verification to RL-trained models that use ToT for inference. As a result, comparing between two methods that can be used in concert imposes an unnecessary trade-off.

> Application of Verification@200 on other LLMs.

In Appendix B.2, we include results that apply Verification@200 to the significantly smaller Gemini 1.5 Flash model. Computational concerns were a limiting factor in the number of LLMs studied, but we find no reason for the trends we observed with 1.5 Pro and 1.5 Flash to not generalize to models like GPT-4o.
Summary: This paper examines scaling test-time compute through a sampling and self-verification approach ("Verification@k"). The authors demonstrate that with sufficient sampling and self-verification, even standard models (Gemini v1.5 Pro) can outperform specialized reasoning models (o1-Preview), and that Verification@k improves naturally with increased sampling, whereas Consistency@k plateaus. The authors also propose a verification benchmark showing that current large language models (LLMs) exhibit weak out-of-the-box verification abilities.

Claims And Evidence: I’m skeptical of the authors’ implicit scaling claim: that increasing the number of generated responses increases the quality of responses. It is unclear whether increased generation improves generation quality or simply increases generation coverage [1], such that even a suboptimal verifier becomes more likely to select a solution that reaches the correct final answer. I don’t follow the authors’ claim that increasing generations will increase the verifier’s likelihood of error, given that Pass@k is known to increase. Thus, instead of focusing on ‘ambiguous’ questions in Figure 2, I believe it is more interesting to identify the number of instances where the correct solution appears only very few times (1-2) among the k_inf generations but the verifier still selects the correct answer (the authors provide a single example of this in Table 2, but there is no further analysis).

[1] Brown, Bradley, et al. "Large language monkeys: Scaling inference compute with repeated sampling." arXiv preprint arXiv:2407.21787 (2024).

Edit: After discussions with the authors, I believe that while the results are not very surprising (i.e., you need model scale and proper output formatting for verification to scale with inference generations), I am more sympathetic to the authors demonstrating that this is possible. I still believe the paper requires restructuring for clarity and more rigorous definitions of e.g.
implicit scaling, but the authors have said they will implement these changes.

Methods And Evaluation Criteria: The evaluation criteria are straightforward (correctness on the reasoning benchmarks considered). I'm seeking clarification on one part of the methodology: the authors propose a “Verification@k” process detailed in Algorithm 1, but upon reading the details in Section 3.1 the authors state that the candidate responses are rewritten in a format with Theorems/Lemmas/Proofs before being passed for self-verification. This is not obvious from the introduction, and it confounds the verification process with an additional step that may be aiding self-verification. There are not many details about how the rewrites are prompted, even though the authors do ablate over variations of this prompt.

Theoretical Claims: N/A, there are no theoretical claims.

Experimental Designs Or Analyses: See my comment in “Claims” regarding the experiments around ‘implicit scaling’. I believe the analysis of accuracy rates relative to scaling the number of generations and number of verifications could be improved with a better FLOPs comparison to judge the best allocation of verification and generation attempts; the authors mention compute for a fixed setting of the parameters on AIME in Section 4, but this should be present across their results in Section 2.

Supplementary Material: Yes, I reviewed the related work, additional experiments, details on the methods, prompts, and sample responses.

Relation To Broader Scientific Literature: There is a growing trend of using additional computation at inference to enhance performance. The paper directly compares to prior test-time scaling methods like Consistency@k. There has also been recent attention in AI research aimed at making LLMs more trustworthy and less prone to errors; self-verification is a prominent approach under exploration.
Essential References Not Discussed: The authors have not cited related work [1], mentioned above.

[1] Brown, Bradley, et al. "Large language monkeys: Scaling inference compute with repeated sampling." arXiv preprint arXiv:2407.21787 (2024).

Other Strengths And Weaknesses: Careful study of how search should be scaled at inference time, and of whether self-verification can be effectively leveraged, is a relevant area of research. The authors present analyses on several standard math and reasoning datasets. The potential of self-verification for models to reach reasoning-model performance is promising. The observation that reasoning models have weak out-of-box verification capabilities has been previously reported (e.g., [1]).

Further, I believe the paper is organized poorly, to the point that it is confusing for a reader to follow. The introduction presents a result (Table 1) before the methods are described, and only Algorithm 1 is given before the authors dive into more results in Section 2, only to loop back to the ‘Sampling-Based Search Implementation’ in Subsection 3.1 and technical details in Section 4. Further, the search method employed in Section 2 appears not to be the same as that in Algorithm 1; the tie-breaking was omitted, which I only understood in full after reading Line 275 in Section 3.2. I ask the authors to reorganize their paper such that the methods and setup are described before results are presented.

Given my feedback about the exposition, the claims, and the insufficient analysis, I do not recommend acceptance of the paper in its current state. I am willing to raise my score if several points raised in my review are clarified, if the paper’s structure is reorganized, and if some additional analyses are provided (see Questions).

[1] Wu, Zhenyu, et al. "Large language models can self-correct with key condition verification." arXiv preprint arXiv:2405.14092 (2024).
Other Comments Or Suggestions: No other comments; see below for questions.

Questions For Authors:

1. Are the results in Section 2 produced by the same process as in Section 3.1? In Step 2, it is mentioned that the scoring attempt is a rewrite of the response as a theorem, lemmas, and proofs. How are these rewrites done? If this was done for Section 2, it should be explicitly stated, because it is not clear from Algorithm 1 that this is what is done, and it confounds the verification process by presenting responses in a standardized format which potentially makes verification easier.
2. Can the authors clarify their point about implicit scaling and how their results suggest higher quality of generations with multiple samples?
3. Do you have a FLOPs comparison of the different settings of k_inf and k_ver? Personally, it is not surprising to me that scaling up both k_inf and k_ver shows increases in performance, and the analysis feels incomplete without a FLOPs tradeoff between inference and verification generations showing, given a fixed compute budget, what the tradeoff should be between generation and verification.
4. On each benchmark, for the questions in which the number of solutions reaching the correct answer is very low (e.g., 1-2 generations), what fraction does Verification@k get correct?
5. Could the authors clarify what is prompted and given to the model during self-verification for the 'main' prompt and each of the three variations? Why are the FP and FN rates missing for the LiveBench questions? Does the difference in FP and FN rates across the different prompts translate to significant differences in Verification@k accuracy?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Thank you for your review. We address your questions below.

> “I’m skeptical of the author’s implicit scaling claim… it is unclear if increased generations improves generation quality or simply increases generation coverage.”

We understand this concern and provided Figure 2 for exactly this reason. Figure 2 explicitly controls for the fact that coverage (Pass@k) increases with k by limiting our evaluation to questions on which the model has at least *one* correct answer (i.e., “ambiguous questions”). In fact, we also provided Figure 4 for this reason, which is exactly the same as Figure 1 except that it controls for “generation coverage”.

Re Brown et al.: Brown et al. is concerned with the scaling of Pass@k (usually an unrealistic upper bound and not an operative metric); we are concerned with how much of the Pass@k - Pass@1 gap can actually be attained.

> “I don’t follow the authors’ claim that increasing generations will increase the verifier’s likelihood of error, given that pass@k is known to increase.”

The only problems on which verification can err are ambiguous problems (where there is at least one correct answer), and increasing k can make verification on ambiguous questions harder. This latter claim follows from basic probability: if you allow students more attempts at an exam, the average pass-rate of a student who passes at least one exam decreases. Similarly, as you increase k, the average number of correct solutions per question decreases. Thus, if implicit scaling didn’t exist, the lines in Figure 2 should all provably be non-increasing or even decreasing.

> “The observation that reasoning models have weak out-of-box verification capabilities has been previously reported”

We agree and note in the paper that “the limitations of model self-verification capabilities are well-studied”.
This is why we believe our results are so surprising: while prior works have attempted to scale self-verification on frontier models, none have succeeded in beating self-consistency with direct self-verification, and have instead needed to turn to masking techniques, learning verifiers, process-based rewards, etc. In contrast, we show that direct self-verification can be scaled to reliably achieve o1-level performance without finetuning, RL, distillation, or custom models—something that our paper is the first to do. While we already cite 30+ papers on LLM self-verification, we will add a brief discussion of Wu et al., which is largely orthogonal to this work.

> Paper structure.

We appreciate the feedback and will look at revising the structure of the paper per your advice. Since the algorithm we present is fairly minimal, we wanted to highlight the scaling trends first.

Questions:

> Are the results in Section 2 doing the same process as in Section 3.1?

Section 2 omits tie-breaking for computational reasons but is otherwise the exact same process as Section 3.1 (including rewriting). Regarding rewriting, we see it as just part of the verification process—one that is sadly often overlooked. We don’t see how it undermines/confounds any of our reported results, e.g. implicit scaling. We provide the rewriting process in the source code (it’s pretty generic; we use the exact same instructions for all benchmarks), but will revise to include a copy in the Appendix.

> Do you have a FLOPs comparison of the different settings of k_inf and k_ver?

We are unable to publicly report the FLOPs used in closed-source models, but do detail the number of output tokens used (from which FLOPs can be extrapolated) and will include a more involved discussion. We focused on closed-source models in this paper as—at the time of this submission—no OSS models had reached the minimum level of capability needed for self-verification to scale.
> For the questions in which the number of solutions which reach the correct answer is very low (e.g., 1-2 generations), what fraction does Verification@k get correct?

We will run a more formal analysis, but we can give some preliminary numbers. In particular, the AIME exam falls in the regime you’re asking about: on 12/15 questions, less than 4% of Gemini Pro solutions are correct. Nonetheless, Verification@200 gets >50% accuracy. On the 7/15 questions where exactly 1-4% of solutions are correct, Verification@200 gets 5/7 correct. We agree that this would be impactful to include and appreciate the suggestion.

> Could the authors clarify what is prompted during self-verification...? Why are the FP and FN rates missing for the LiveBench questions? Does the difference in FP and FN rates translate to... Verification@k accuracy?

The full prompts for all four options referenced in Table 4 can be found in the source code; we will revise to include a copy in the Appendix. We added LiveBench at a later date and had not yet re-run Table 4, but will include them in our revision. We informally observed Verification@k accuracy to be especially sensitive to FN rates, and will add a formal table to this effect.

---

Rebuttal Comment 1.1: Thank you to the authors for their response. I am more sympathetic to the authors clarifying the main contributions of their paper, which show that self-verification can be successful with sufficient scale and certain prompt rewritings.

* I believe part of the confusion about the results, methods, and claims of the paper is due to its structure. I understand that the algorithm is not incredibly complex, but it requires more clarification and context given that other reviewers have also brought this up.
Along with properly explaining the algorithm before diving into results, I also believe the authors need to highlight the rewriting mechanism more, because it seems this was an important part of the process (regarding the Table 4 results and the fact that it is 'often overlooked').

* When the authors say "no OSS models had reached the minimum level of capability needed for self-verification to scale", what is this 'minimum level of capability'? It seems that for Verification@k accuracy to increase with k, one needs a model with sufficient self-verification capabilities as well as a fat enough tail over outputs such that one is more often sampling a correct, high-quality response.

I am willing to increase my score if the authors agree to revise the structure of the paper for further clarity and include the missing results.

---

Reply to Comment 1.1.1: We appreciate the detailed feedback around the paper's structure and have indeed been revising the structure accordingly. It's actually quite refreshing to have such direct, helpful, and actionable feedback.

Thank you for the opportunity to clarify: even if we have a set of candidate solutions containing good solutions (e.g. generated by a capable enough model with a fat enough tail), we need a verifier model that meets a "minimum level of capability". Specifically, if the verifier model is sufficiently poor at verification, it may end up behaving like random sampling---producing results actually inferior to self-consistency---or worse (for example, weak models will often bias towards long, detailed solutions where the generating model actually went off the rails). It's hard to formally quantify this "level of capability", but we consider MATH and LiveBench Reasoning performance to be a reasonable proxy and anecdotally estimate GPT-4o/Gemini 2.0-Flash to be just above this minimum level.
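The point above about a weak verifier degenerating to random sampling can be illustrated with a toy simulation (hypothetical numbers, not from the paper): when the verifier picks uniformly at random among k candidates, its accuracy on questions with at least one correct response must fall as k grows, which is why a below-threshold verifier can end up worse than self-consistency.

```python
import random

def random_verifier_accuracy(k: int, p: float = 0.2,
                             trials: int = 50000, seed: int = 0) -> float:
    """Accuracy of a uniformly random 'verifier' on simulated questions
    that have at least one correct response among k candidates, where
    each candidate is correct independently with probability p."""
    rng = random.Random(seed)
    hits = total = 0
    for _ in range(trials):
        n_correct = sum(rng.random() < p for _ in range(k))
        if n_correct == 0:
            continue  # question has no correct candidate; verification can't err
        total += 1
        hits += rng.random() < n_correct / k  # uniform pick lands on a correct one
    return hits / total
```

Under this model, accuracy at k=2 is noticeably higher than at k=20, so any observed increase of verification accuracy with k requires a verifier that is genuinely better than random.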
Summary: This paper claims that while self-consistency can greatly improve LLM performance, leveraging additional test-time compute to verify/compare generated responses can break the self-consistency plateau and further enhance model performance. The paper conducts extensive experiments to validate these findings and proposes a verification benchmark to evaluate models’ verification capabilities. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria make sense. Theoretical Claims: The paper focuses on empirical findings; there are no theoretical proofs or claims. Experimental Designs Or Analyses: Yes, I have reviewed the validity of the experimental designs and find them well-constructed and sufficient to support the paper's claims. Supplementary Material: The supplementary materials consist of code implementations. While I haven't examined the code in detail, it appears to be properly documented. Relation To Broader Scientific Literature: The finding that models' verification abilities can be enhanced through increased response sampling has potential for improving model performance. Essential References Not Discussed: This paper discusses the related work in detail. Other Strengths And Weaknesses: **Strengths** 1. This paper identifies several previously unstudied principles of self-verification, including implicit scaling, output style suitability, and the importance of directly comparing responses, which is interesting, worth investigating, and promising for enhancing LLM reasoning ability. 2. This paper provides a verification benchmark to evaluate LLM self-verification abilities. **Weaknesses** While the authors explicitly state that their work focuses on demonstrating scaling trends in LLM self-verification rather than optimizing specific strategies, I have two main concerns: 1.
Self-verification appears to be a subset of LLM self-consistency, making the observed scaling trends somewhat predictable. 2. Although the authors differentiate their work through discoveries about output style suitability and the importance of direct response comparison, these contributions may not be substantial enough. Other Comments Or Suggestions: If there are any misunderstandings on my part, please point them out, and I will reconsider my evaluation of this work. Questions For Authors: I have three questions: 1. The relatively small gap between Pass@1 and Consistency@5 in Table 5 seems to contradict the paper's main findings about scaling trends. Could the authors explain this? 2. In Table 5, the random guessing entries show an 80%/20% split for Correct/Wrong responses rather than the expected 50%/50%. Could the authors explain this in more detail? 3. Are there evaluation results for Gemini v1.5 on the Verification Benchmark? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review. To address your comments on “predictability”: Many papers have studied self-consistency and self-verification, and attempted to scale them up. None have reported the same success that we have; in fact, we aren’t aware of any prior works that have successfully applied self-verification to frontier models in a way that actually beats self-consistency. Instead, to beat self-consistency, the field has turned to training process-based reward models, using RL to learn verifiers, and other complicated interventions. This has led to a general belief that direct self-verification isn’t enough, whereas we show that it is—you just need to hit a certain threshold of scale and obey some important principles. While this may be intuitive, it’s not at all “predictable”. It's also noteworthy that direct self-verification can successfully and reliably extract o1-level performance from a non-reasoning model without any finetuning, distillation, RL, or custom-trained verifiers (again, no other works/methods—verification-based or otherwise—have achieved this). To your second concern: As we note above, our main contribution is showing that scaling search with self-verification *does work* on frontier models and providing an explanation for why: implicit scaling. Methods like direct comparisons are ways of taking advantage of implicit scaling, and are a big reason why our attempts at scaling self-verification were uniquely successful. For example, it is notoriously difficult to beat self-consistency when it comes to the MATH performance of frontier models that already get 90%+ accuracy (hence our paper is the first to do so). As we show in Table 3, direct comparison plays a key role in overcoming this “last-mile” barrier, enabling self-verification to actually beat self-consistency. 
— Individual questions: > “The relatively small gap between Pass@1 and Consistency@5 in Table 5.” The small gap between Pass@1 and Consistency@5 indicates that scaling self-verification accuracy via self-consistency is limited in effectiveness—this doesn’t contradict any of our findings. In fact, it’s part of our core argument: the techniques we propose (like direct comparisons) allow for more effective test-time scaling of self-verification than self-consistency. > "In Table 5, the random guessing entries show an 80/20 split for Correct/Wrong responses rather than the expected 50/50. Could the authors explain?" 50/50 and 80/20 are both attainable with random guessing (just change the bias of the coin you flip). Because the prior of frontier models is skewed towards 80/20, we wanted to provide a similarly calibrated random baseline and so opted to display 80/20 instead of 50/50. > "Is there the evaluation result for Gemini v1.5 in Verification Benchmark?" Good question; we exclude Gemini v1.5 from the benchmark since we use Gemini v1.5 to generate the candidate solutions. Evaluating Gemini v1.5 on the benchmark seemed unsound given the potential confounding.
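For readers puzzled by the 80/20 baseline: the expected accuracy of a coin-flip verifier on a label-skewed benchmark follows directly from the two rates involved. A one-line sketch (the formula is standard probability; the 80/20 prior is the one mentioned above):

```python
def random_verifier_accuracy(p_label_correct, p_guess_correct):
    """Expected accuracy of a verifier that answers "correct" with
    probability p_guess_correct, independent of the input, on a
    benchmark where a fraction p_label_correct of candidate
    solutions are actually correct."""
    return (p_label_correct * p_guess_correct
            + (1 - p_label_correct) * (1 - p_guess_correct))

fair = random_verifier_accuracy(0.8, 0.5)        # 0.5 regardless of skew
calibrated = random_verifier_accuracy(0.8, 0.8)  # 0.68 on an 80/20 benchmark
```

Both coins count as "random guessing"; the calibrated one simply matches the benchmark's prior, which is why it is the fairer baseline when frontier models have a skewed prior.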
Learning Mixtures of Experts with EM: A Mirror Descent Perspective
Accept (poster)
Summary: In this paper, the authors discuss the relationship between the EM algorithm and mirror descent in the context of mixture-of-experts (MOE) learning. In the beginning, the authors provide an overview of the EM-based parameter learning procedure for MOE. On this basis, the authors then introduce projected mirror descent (PMD), establish the connection between EM-based MOE learning and PMD-based MOE learning, and derive the convergence properties of EM-based MOE learning with the help of PMD. After that, they extend their proposed theorem to the Symmetric Mixture of Linear Experts (SymMoLinE) as a special case. Finally, various experiments are conducted to demonstrate the efficacy of the proposed approach. Claims And Evidence: Nearly all the claims made in the submission are supported by clear and convincing evidence. Nevertheless, some of the notation in this manuscript is unclear. For example, in Eq. (1), the authors introduce $\phi$ but do not state what $\phi$ means. The parameter of some MOE models? In addition, on page 6, right column, `Additionally, while GD regularizes progress based on the Euclidean distance between iterates, EM adjusts progress based on the divergence between probability distributions across iterations. This is often more suitable for latent variable models, where small Euclidean changes may cause large shifts in the mixture distribution, and vice versa.`, although the reviewer agrees with this statement, the authors should add extra citations or related figures to back up this phenomenon. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense for demonstrating the convergence analysis; the authors also provide experiments on the Fashion-MNIST dataset to demonstrate the efficacy of the proposed approach. Theoretical Claims: The reviewer attempted to check the derivation of the theorem, but some issues remain to be addressed. 1.
At the beginning, the authors are suggested to add a nomenclature section to better facilitate following the derivations. 2. On this basis, some symbols should be rectified. For example, should $\mathbb{E}_{Z|\boldsymbol{x},y;\theta^t}$ be written with respect to $q(Z|\boldsymbol{x},y;\theta^t)$, i.e., the expectation operator applied to a probability density function with inputs $\boldsymbol{x}$ and $y$? 3. In Appendix B.6, to the reviewer's understanding, the introduction of the MIM is for proving the locally convex or relatively strongly convex region of $\mathcal{L}(\theta)$. Can this condition be extended to the generalized MOE model? 4. Can the concept of $\lambda$-convexity be used directly in the proof to simplify and generalize the convergence condition? --- References: [1]. {Euclidean, Metric, and Wasserstein} Gradient Flows: an overview Experimental Designs Or Analyses: 1. In the context of deep learning, training is always conducted in a mini-batch paradigm. Thus, during the evolution of mirror descent, the parameters may be perturbed by some noise. Specifically, according to the reviewer's understanding, the update should be $\theta_{t+1} = \theta_t - \eta\nabla_\theta\mathcal{L}(\theta_t) + \epsilon_t$ with $\epsilon_t \sim \mathcal{N}(0,\sigma^2 I)$. The noise may improve model performance to some extent, and the authors have not considered this issue. 2. To the reviewer's understanding, the choice of initial value may greatly affect model performance during training. However, this part has not been investigated experimentally. Would it be better to add some asymptotic convergence analysis to sidestep conducting this experiment? Supplementary Material: The reviewer reviewed the supplementary material and suggests reorganizing it. For example, it would be better to put the special case SymMoLinE at the beginning to ease the reading.
In addition, in the proof of $Q(\theta|\theta^t)$, the authors should first introduce the sufficient statistics in the equation; otherwise, the introduction of $S(y,z)$ seems confusing. Finally, on page 13, in the last two equations, it seems that the symbol $\rangle$ is missing. Relation To Broader Scientific Literature: MOE is of great importance in the era of LLMs, and the authors propose a novel understanding of why the GD-based MOE learning procedure is ineffective and how to improve it from the perspective of mirror descent. Essential References Not Discussed: The reviewer notes that the GD-based approach considered here is first-order and full-batch. Thus, two related works may be considered: 1. Optimizers with second-order information, like reference [1]. 2. The effect of mini-batch sampling on parameter learning [2]. --- References: [1]. Optimizing Neural Networks with Kronecker-factored Approximate Curvature (ICML 2015) [2]. Stochastic Gradient Hamiltonian Monte Carlo (ICML 2014) Other Strengths And Weaknesses: ### Strengths 1. The topic is relevant to the ICML conference. 2. The proposed approach is interesting. 3. The derivation is rigorous. ### Weaknesses 1. The organization of the main content should be revised. For example, the parameter-learning background for MOE should be more compact; this part can be moved to the supplementary material. 2. The presentation of the convergence analysis could be reformulated. For example, an introduction of SymMoLinE could be given first, followed by the generalization to other MOE structures. 3. Figures 2 and 3 do not include error bars. 4. The results in Tables 1 and 2 do not include statistical significance tests. Other Comments Or Suggestions: See the comments above. Questions For Authors: See the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
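As background for the KL-versus-Euclidean regularization point raised in this review: mirror descent with a KL (Bregman) regularizer over a probability simplex reduces to a multiplicative "exponentiated gradient" update. A minimal sketch on a toy linear objective (our own illustration; the objective and step size are made up and unrelated to the paper's experiments):

```python
import math

# Minimize L(p) = sum_i c[i] * p[i] over the probability simplex.
c = [3.0, 1.0, 2.0]

def mirror_descent_step(p, lr):
    # argmin_p <grad L, p> + (1/lr) * KL(p || p_t) has the closed
    # form p_i proportional to p_t[i] * exp(-lr * grad_i): a
    # multiplicative update, unlike GD's additive Euclidean step.
    w = [pi * math.exp(-lr * gi) for pi, gi in zip(p, c)]
    z = sum(w)
    return [wi / z for wi in w]

p = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    p = mirror_descent_step(p, lr=0.5)
```

The iterates stay on the simplex by construction and concentrate on the coordinate with the smallest cost, illustrating why divergence-based regularization is natural for updates over probability distributions.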
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback in highlighting key strengths and limitations of our work. Below, we address the main points raised: - **Theory:** **The reviewer raised concerns about the nomenclature, use of KL vs L2 regularizer, the relationship between the MIM and the generalized result, and whether $\lambda$-convexity can be used to simplify and generalize the convergence condition.** We agree that adding a dedicated notation section and a table of contents in the supplementary materials would improve clarity. Regarding the convergence conditions in Theorem 4.2, we acknowledge that assuming $L$ is $\lambda$-convex relative to the mirror map $A$ would suffice to establish the claim. Since $L$ is $\lambda$-convex almost surely for all $x$, the same holds in expectation, satisfying our convergence conditions. However, this imposes a stronger condition on $L$, which may not hold in general for MoE. Instead, we provide weaker conditions that relax this requirement to hold (1) in expectation with respect to $x$ and (2) only within a neighborhood around the true parameters. We verified these assumptions in Appendix B for special cases of MoE (SymMoLogE and SymMoLinE). A similar argument applies to the MIM conditions. Lastly, the KL divergence regularizer makes sense in the context of MoE since it is invariant to the choice of parametrization, while the L2 norm is not (see Kunstner et al., 2021 for an in-depth discussion). **Planned Revision:** We will include a notation section and table of contents in the Appendix and allocate space after Theorem 4.2 to discuss how our conditions relate to standard $\lambda$-relative convexity. - **Mini-Batch:** **In deep learning, training is typically done via mini-batches, which introduce noise that may impact model performance. This issue has not been considered.** We acknowledge the limitation of focusing on full-batch EM and GD.
However, our study maintains consistency by considering full-batch versions in both theoretical and empirical analyses. Theoretically, since EM in our settings is equivalent to projected Mirror Descent, any theoretical result for mini-batch projected Mirror Descent would apply to EM. Empirically, following your recommendation and that of other reviewers, we conducted an additional experiment on the mini-batch training of a mixture of 5 lightweight CNN experts on CIFAR-10. The results align with our FMNIST experiments (see our reply to reviewer z9yn for details). **Planned Revision:** We will discuss challenges in extending our theoretical results to mini-batch EM and include our mini-batch empirical results on CIFAR-10 in Appendix C. - **Initialization:** **The reviewer suggests that initialization may significantly impact model performance and asks whether asymptotic convergence analysis could substitute for these experiments.** Theorem 5.2 and Corollary B.1 guarantee that EM will converge sub-linearly to a stationary point of $L$ regardless of initialization. Empirically, we tested the effect of initialization in the synthetic setting. We observed that for "close-enough" initialization, EM converged super-linearly to the true parameters, while for poor initialization, EM and GD converged sub-linearly, sometimes to suboptimal solutions. These findings align with previously known empirical results on EM (Kunstner et al., 2021; Xu and Jordan, 1996). **Planned Revision:** We will provide our empirical results on initialization for the synthetic setting in Appendix C. - **Three key areas for improvement:** **1) Reorganizing the main text for clarity while moving relevant parts to the supplementary material for completeness.** **2) Reordering key results for better comprehension.** **3) Strengthening empirical results with additional statistical metrics.** **Planned Revision:** 1) We will move lines 185-220 (left column) to the appendix and condense Section 5.1. 
For completeness, we will add a dedicated Appendix section covering all relevant introductory details. 2) We will provide a proof sketch for Theorem 4.2 (see our reply to reviewer yx5r). 3) Following your recommendation, we re-ran the FMNIST experiment (50 iterations, 25 instances, full batch) to include error bars and a paired t-test (implemented using scipy's `ttest_rel`). We are also conducting this analysis for synthetic and CIFAR-10 experiments. Our statistical results are: - **EM vs GD:** T-statistic = 24.32, p-value = 0.000. - **Gradient EM vs GD:** T-statistic = 17.81, p-value = 0.000. We conclude that our results are statistically significant. See the updated Figure 3: [https://prnt.sc/p9rfbnIfycX2](https://prnt.sc/p9rfbnIfycX2). **Lastly, we thank the reviewer for any additional comments and encourage the reviewer to read "Summary of Planned Revisions Based on All Reviewers' Feedback" in our rebuttal to reviewer z9yn. Have a nice day!** --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I intend to improve the score for this work. However, please ensure that the revised manuscript includes the theoretical discussion on mini-batch training as well as the supplementary experimental results.
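For reference, the paired t-test mentioned above (scipy.stats.ttest_rel) reduces to a one-sample t-test on the per-instance differences. A self-contained stdlib sketch with made-up toy numbers (the inputs are illustrative, not the paper's accuracies):

```python
import math

def paired_t_statistic(xs, ys):
    """T-statistic of a paired t-test: mean of the paired
    differences divided by its standard error, with n-1 degrees
    of freedom. Matches the statistic scipy.stats.ttest_rel
    reports."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Toy per-instance scores for two methods (illustrative only).
t = paired_t_statistic([5, 6, 4, 7, 5], [3, 4, 3, 5, 2])
```

A large positive t, compared against a t-distribution with n-1 degrees of freedom, indicates the first method's per-instance advantage is unlikely under the null of equal means; pairing removes the shared per-instance variance, which matters when all methods are run on the same 25 instances.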
Summary: This paper focuses on integrating MoE optimization with the EM algorithm. The authors first prove theoretical guarantees for the EM algorithm when training MoE models. Then, the authors focus on the special case of a mixture of 2 linear or logistic experts and analyze the guarantees for linear convergence. Next, the authors conduct experiments on small-scale data to demonstrate that the EM algorithm can outperform the GD algorithm. Claims And Evidence: This paper provides detailed theoretical proofs and experimental evaluation on special cases. Methods And Evaluation Criteria: The provided results are on synthetic and small-scale datasets. It is hard to determine whether the proposed method is still effective on large-scale datasets. Moreover, this paper uses two special cases to conduct the experiments, which is not fully convincing. Furthermore, the authors only compare the EM algorithm with the GD algorithm, which also limits the contribution of this paper. Theoretical Claims: The theoretical claims in this paper are convincing. I have checked all the claims in the main pages. Experimental Designs Or Analyses: As mentioned in “Methods And Evaluation Criteria”, the authors select only two special cases for the experiments. Moreover, the datasets used are synthetic and small-scale, and the only compared method is the GD algorithm. All of these settings limit the contribution and persuasiveness of this paper. Supplementary Material: I have reviewed the supplementary material in Appendix A. The corresponding analysis is helpful for understanding the main content of the paper. Relation To Broader Scientific Literature: The key contribution can provide some insights for training better MoE methods or designing MoE-based methods. Essential References Not Discussed: The references are sufficient. Other Strengths And Weaknesses: Please refer to the “Methods And Evaluation Criteria” and “Experimental Designs Or Analyses” parts Other Comments Or Suggestions: 1.
An additional notation table would be helpful for following the proofs in this paper. 2. Experiments on large-scale or diverse datasets would enhance the persuasiveness of this paper. Questions For Authors: Please refer to the “Methods And Evaluation Criteria” and “Experimental Designs Or Analyses” parts Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback in highlighting key strengths and limitations of our work. Below, we address the main points raised: - **The provided results are synthetic and small-scale datasets. It is hard to distinguish whether the proposed method is still effective on large-scale datasets. Moreover, this paper uses two special cases to conduct the experiments, which is not so convincing. Moreover, the authors just compare EM algorithms with GD algorithm, which also limits the contribution of this paper.** We agree with the reviewer that the paper does not feature large-scale experiments which could help motivate practical use of EM for large-scale MoE training. As this paper is a purely theoretical work within the broad EM and MoE literature that provides correspondence between EM and MD and subsequent SNR conditions for linear convergence, we chose to restrict the empirical experiments to supporting our theoretical claims on the two sub-cases considered in Appendix B. Nonetheless, based on your review, we scaled our experiments from training a mixture of 2 logistic experts model on FMNIST to training a mixture of 5 lightweight CNN models on CIFAR-10 (620k parameters, batch_size=2500, 50 epochs, and 25 instances). We report that compared to GD, EM had an average final test accuracy of **41.6%**, which is **7.1%** higher than GD. Similarly, we report that compared to GD, Gradient EM had an average final test accuracy of **37.4%**, which is **2.9%** higher than GD, while GD had a final test accuracy of **34.5%**. A screenshot of the resulting plot can be found at: [https://prnt.sc/ZXBLKD2e8bN2](https://prnt.sc/ZXBLKD2e8bN2). Moreover, as per reviewer kaEY, we are also providing a paired t-test to check for statistical significance. - For **EM vs GD**, we report a **T-statistic of 25.15** (p-value of **0.000**). - For **Gradient EM vs GD**, we report a **T-statistic of 22.88** (p-value of **0.000**). 
We conclude that our results are **statistically significant**. **Planned Revision:** *We will include our CIFAR experiments in Appendix C to highlight preliminary results on scaling EM to 'large'-scale MoE.* - **An additional notation table is helpful to follow the proof of this paper.** **Planned Revision:** *We will include a notation section and table of contents at the beginning of the Appendix.* # Summary of Planned Revisions Based on All Reviewers' Feedback As part of our commitment to addressing the reviewers' valuable suggestions and concerns, we have outlined a set of planned revisions. We sincerely appreciate the constructive feedback and believe that our responses, along with these well-defined and feasible revisions, will demonstrate the paper’s readiness for acceptance into the conference. 1) **Readability:** To enhance clarity and accessibility, we will: A) streamline the main text by moving portions of the background on EM theory and MoE parameter estimation to the supplementary material, B) introduce a dedicated **notation section and table of contents** in the Appendix to improve readability, and C) use the additional space in the main text to provide a **proof sketch of Theorem 4.1** (see our reply to Reviewer yx5r to see our suggested proof sketch), highlighting key proof techniques that address previous gaps in the literature. 2) **Discussions:** We will: A) provide a thorough discussion regarding the **scalability of EM to large-scale and sparse MoE** as well as challenges in extending our **theoretical framework to mini-batch EM**, and B) expand our discussion on the class of models for which our results hold (general MoE), clarifying how our convergence conditions in Theorem 4.2 and Corollary B.1 compare to standard convexity assumptions for GD. Specifically, we will elaborate on their relationship to $\lambda$-(relative) convexity in the discussion following Theorem 4.2. 
3) **Experiments:** To strengthen our empirical analysis, we will include **error bars in the plots and perform a statistical significance test** (e.g., a paired t-test) to support the claim that EM provides a meaningful improvement over GD (we have obtained these results, please see our reply to reviewer kaEY and z9yn). In Appendix C, we will: A) add a discussion connecting theoretical results to the experiments such as verifiability of the linear rate and satisfiability of assumptions $A_1$-$A_3$, B) include **initialization experiments** (see our reply to reviewer kaEY), C) provide our **experiments on mini-batch training of a mixture of 5 lightweight CNNs on CIFAR-10** (see our reply to reviewer z9yn), and D) provide all necessary implementation details in Appendix C (such as learning rate, precise model architecture, number of instances, etc.). We are looking forward to hearing your valuable feedback on our responses and proposed revisions. --- Rebuttal Comment 1.1: Comment: The authors have answered all my questions and I have no further questions.
Summary: This paper studies the relationship between EM for general MoE and projected mirror descent algorithms. Based on the equivalence between EM and mirror descent, this work provides non-asymptotic convergence rates for training MoE with EM. Claims And Evidence: Yes, this work provides solid proofs of the theorems and claims. Methods And Evaluation Criteria: This paper mainly focuses on theoretically understanding the relationship between the EM method and the mirror descent method. The theory demonstrates that the EM method is theoretically effective for training MoE. Theoretical Claims: I briefly checked the proof of Theorem 4.2, the main theorem. Experimental Designs Or Analyses: My expertise in MoE experiments is limited, making it difficult to assess their validity. However, I believe the experimental results can support the theoretical claims to some extent for a theoretical paper. Supplementary Material: I checked the parameter tuning in MNISTMOE-FC.html. It would be better if the learning rate settings were uniform for fairness. Relation To Broader Scientific Literature: The key contributions in this paper primarily follow [Kunstner et al., 2021]. This work extends the results to the MoE model and considers the mixtures-of-regression setting. Essential References Not Discussed: I am not aware of any missing references. Other Strengths And Weaknesses: Strengths: - This paper is well-written and easy to follow. It clearly presents its contributions, and the relevant theoretical assumptions are well-stated. - This work provides a new approach for analyzing the convergence rate of MoE training when using the EM method. - When the $\beta$-update in the EM method has a closed-form solution (as is the case for linear experts, mentioned in the paper), the EM approach does not require tuning learning rates, potentially offering a more robust learning algorithm.
Weakness: - The assumptions on the distribution are restricted to the exponential family, which may be overly idealized compared to realistic applications. - The EM method might potentially introduce additional computational costs and is more difficult to implement compared to the gradient descent method, limiting the applicability of the proposed method in the paper. Other Comments Or Suggestions: I suggest that the author provide a more technical highlight of the challenges or analytical innovations involved in establishing equivalence in MoE, compared to [Kunstner et al., 2021]. This could enhance the paper’s impact and make it more accessible for future research to build upon. Questions For Authors: - In line 147, could an additional parameter be introduced to consider the distribution $p(z \mid x; w) \propto \exp(\gamma x^\top w)$? - The analysis in the paper relies on the convexity of the function L. How can the obtained results be further interpreted in the context of non-convex optimization in neural networks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We begin by extending our thanks to the reviewer for the very constructive feedback in highlighting important strengths and limitations of our work. We address the main points raised below: - **Concerns:** **1) The assumptions on the distribution are restricted to the exponential family. 2) The EM method might potentially introduce additional computational costs and is more difficult to implement compared to GD.** 1) Whilst we agree with the reviewer that our results only hold for realizable settings where the conditional distribution belongs to an exponential family, we maintain that the theoretical results obtained in this paper constitute a significant improvement over the previous literature, extending the connection between EM and MD from simpler model classes like Mixtures of Gaussians to a class that is more practically relevant today, the general MoE described in Section 2. 2) To address this concern, the empirical results in Figures 1, 2, and 3 also include the Gradient EM algorithm ("GradEM"), which is a variant of the EM algorithm with per-iteration cost identical to that of GD. Even then, we still observe an improvement over GD, though less significant. **Planned Revision:** *We will include a thorough discussion of the limitations, discussing the scalability of the EM algorithm to large-scale and sparse MoE.* - **Suggestion:** **The authors should provide a more technical highlight of the challenges or analytical innovations involved in establishing equivalence in MoE, compared to [Kunstner et al., 2021].** This is a very good point. We clarify that in Kunstner et al. Theorem 1, the assumption that $p(y,z,x|\theta)$ is an exponential family of distributions allows one to decompose $Q(\theta|\theta^t)$ and show direct equivalence between EM and MD.
In our results, we relax this assumption, only requiring the conditional distribution $p(y,z|x, \theta)$ to belong to an exponential family of distributions (this includes more complicated mixtures like MoE). With this relaxation, we can only decompose $Q(\theta|\theta^t)$ inside the expectation with respect to x. Then, we utilize a decomposition of $\nabla L$ to relate the expression inside the expectation to the point-wise (in x) iterations of MD. Finally, this allows us to show direct equivalence between the iterations of EM and a projection over the iterations of MD. **Planned Revision:** *We will make space in the main text to provide a proof sketch of Theorem 4.1 that will highlight the proof techniques that allow us to bridge this key gap in the literature. See our proposed proof sketch in the screenshot: https://prnt.sc/wMDRIY7neary.* - **Questions:** **1) In line 147, could an additional parameter be introduced to consider the distribution $p(z|x;w) \propto \exp\{\gamma x^\top w\}$?** We clarify that our results hold for any probability functions $p(z|x;w)$ and $p(y|z,x;\beta)$ so long as $p(y,z|x,\theta)$ belongs to an exponential family. Since the norm of w is unbounded in our setting, introducing the new parameter $\gamma$ does not provide a generalization of the problem. **2) The analysis in the paper relies on the convexity of $L$. How can the obtained results be further interpreted in the context of non-convex optimization?** We clarify that none of our results explicitly require the objective function $L$ to be convex, as such an assumption does not generally hold for MoE. Instead, we require weaker conditions; Theorem 4.2 requires that the algorithm is initialized inside some neighborhood around the true parameter $\theta^*$ that satisfies inequality (20) (or (21)), wherein a notion similar to convexity is maintained in expectation over the feature variable x (similarly for Corollary B.1).
We emphasize that the form of convergence results we obtain is consistent with previous works on EM such as [Kunstner et al. 2021, Balakrishnan et al. 2017]. If it is assumed that $p(y|x,z;\beta)$ is Gaussian, our result in Theorem 4.1 can be readily applied to the setting where the experts are neural networks. In appendix B, we uncovered an interpretable result for SymMoLogE and SymMoLinE that, when training MoE with EM, one should initialize the gating parameter with a norm much smaller than that of the expert parameters, a finding that was observed empirically on page 10 of [Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity, 2022]. **Planned Revision:** *We will enhance the discussion of the class of models for which our results hold and how our conditions for convergence differ from standard assumptions of convexity. We will also add a discussion on how to scale our theory to more complex experts such as neural networks.* **Lastly, we thank the reviewer for any additional comments and encourage the reviewer to read "Summary of Planned Revisions Based on All Reviewers' Feedback" in rebuttal to reviewer z9yn. Have a nice day!**
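To illustrate the learning-rate-free, closed-form E- and M-steps discussed throughout this thread, here is a deliberately tiny 1-D instance of a symmetric mixture of two linear experts (our own sketch, not the paper's experimental code; it assumes a uniform gate and unit noise variance, whereas the paper's SymMoLinE gates on x and is more general):

```python
import math
import random

random.seed(1)

# Data from y = s * B_TRUE * x + N(0, 1), with latent sign s = +/-1.
B_TRUE = 2.0
data = []
for _ in range(400):
    x = random.gauss(0.0, 1.0)
    s = random.choice([-1.0, 1.0])
    data.append((x, s * B_TRUE * x + random.gauss(0.0, 1.0)))

def em_step(b):
    # E-step: posterior responsibility of the +1 expert; for unit
    # Gaussian noise it is sigmoid(2 * b * x * y).
    # M-step: weighted least squares in closed form, so no learning
    # rate is needed.
    num = den = 0.0
    for x, y in data:
        r = 1.0 / (1.0 + math.exp(-2.0 * b * x * y))
        num += (2.0 * r - 1.0) * x * y
        den += x * x
    return num / den

b = 0.5  # a "close enough" initialization
for _ in range(40):
    b = em_step(b)
```

From this initialization the iterates converge rapidly toward one of the two symmetric solutions at magnitude B_TRUE; at b = 0 the responsibilities are exactly 1/2 and the update stalls at a spurious fixed point, echoing the initialization discussion above.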
Summary: This paper discusses the relationship between the EM algorithm and MoE models. In particular, a relationship is shown between the EM update of the expert and router parameters and Mirror Descent (MD) with a specific expression for the Bregman divergence regulariser. Claims And Evidence: The paper claims to extend the result of Kunstner et al. (2021), namely that an EM update can be interpreted as MD with a KL divergence regularizer, beyond exponential family mixtures. This includes Mixture of Linear and Logistic Experts. Methods And Evaluation Criteria: The paper demonstrates the relevance of the result by conducting experiments on both synthetic data and Fashion MNIST, where the goal is to compare the convergence of vanilla GD, gradient-based EM and EM. It is shown that (i) EM converges faster than gradient-based EM, and (ii) EM outperforms the vanilla GD algorithm in terms of final performance (e.g. accuracy or cross-entropy), which makes sense. Theoretical Claims: The main theoretical claim is that EM is a projected Mirror Descent algorithm for general MoE, extending the result of Kunstner et al. (2021), which focuses on exponential family mixtures. This is conducted in Section 4.1, where convergence guarantees are adapted for general MoE. The result is then applied to SymMoLinE and SymMoLogE, providing a closed-form expression of the Mirror Map in this special case. Experimental Designs Or Analyses: The experiments are conducted on both synthetic data with SymMoLinE and SymMoLogE, and Fashion MNIST with random color image inversion to model the need for 2 experts. It shows the effectiveness of the EM algorithm in convergence speed and performance in those settings. Supplementary Material: The derivations are well-written and sound correct. Relation To Broader Scientific Literature: The paper is mainly built upon Kunstner et al. (2021), which derived a connection between EM and MD, and Makkuva et al. (2019), Becker et al. 
(2020) which relates EM and MoE. It extends Kunstner et al. (2021) to general MoE that includes non-exponential family mixtures. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: The paper is well-written and is easy to read. The scope of the theoretical contribution compared to Kunstner et al., (2021) is clear, and the results are relevant in the context of MoE, as it helps to improve their interpretation. The experimental results also highlight the practical advantages of EM-based optimization. Weaknesses: Limitations of the work are not mentioned. Question 1: Are there still settings where standard GD (and momentum-based variants, such as Adam) has advantages over EM-based optimization in the context of MoE? Furthermore, more details about the experimental setup (needed to reproduce the results: e.g., batch size, and learning rate used) are required in Appendix C. Other Comments Or Suggestions: Typo: It should be $p(y | x, z ; \theta)$ instead of $p(y | x ; \theta)$ in the second equality when deriving the decomposition (10) in section 3 (L 203). It would be great to improve the connection between the experimental and theoretical results. Question 2: To what extent are the assumptions of $A_1-A_3$ verified in the experiments? Question 3: What are the practical implications for the future of MoE of this connection with MD? Questions For Authors: Question 4: Did the asymptotic convergence error of the GD in Figures 1 and 2 reach the one of the EM and Gradient EM? Question 5: How do you explain that the Single Expert in Table 1 trained without Random Inversion reaches a better accuracy ($83.2 \\%$) compared to the three 2-Component MoLogE methods in Table 2? Code Of Conduct: Affirmed. Overall Recommendation: 4
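For background on the EM-as-mirror-descent view this review refers to, the generic MD iteration with a Bregman-divergence regularizer can be written as follows (notation here is generic; the paper's Theorem 4.1 specifies the exact mirror map and projection):

$$
\theta^{t+1} = \arg\min_{\theta} \; \left\langle \nabla L(\theta^{t}),\, \theta \right\rangle + \frac{1}{\eta}\, D_{\phi}\!\left(\theta, \theta^{t}\right),
\qquad
D_{\phi}(\theta, \theta') = \phi(\theta) - \phi(\theta') - \left\langle \nabla \phi(\theta'),\, \theta - \theta' \right\rangle.
$$

With step size $\eta = 1$ and $\phi$ taken to be the appropriate mirror map (the log-partition function in the exponential-family setting of Kunstner et al., 2021, which makes $D_{\phi}$ a KL divergence), one EM iteration coincides with one MD iteration.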
Rebuttal 1: Rebuttal: We begin by extending our thanks to the reviewer for the very constructive feedback in highlighting important strengths and limitations of our work. We address the main points raised below: - **The limitations of the work are not mentioned** Thanks for your feedback. We would like to mention that the main limitations of our theoretical results are that they hold for the realizable setting, and the extension to the agnostic setting requires further investigation. Also, we currently only investigated full-batch EM, while we believe an extension to the mini-batch case is possible, as illustrated by our new experiments. We will include these points in the revised paper, and in particular, will mention them in the revised conclusion. - **More details about the experimental setup (needed to reproduce the results: e.g., batch size, learning rate used) are required in Appendix C.** That is a valid point. In the revised paper, we will add all details necessary to reproduce our empirical results to Appendix C. Thanks for your comment. - **Question 1:** **Are there still settings where standard GD (and momentum-based variants, such as Adam) has advantages over EM-based optimization in the context of MoE?** This is a very good point. We have theoretically shown that EM requires relative convexity to hold in expectation and within a neighborhood of the true parameters to ensure linear convergence. These sufficient conditions differ from those of GD, which requires strong convexity within a neighborhood around the true parameters for linear convergence. The key insight is that we characterize the conditions under which EM enters and remains in the region where linear convergence holds, whereas it remains unclear under what conditions GD iterates satisfy the local neighborhood assumptions required for linear convergence. That said, this does not imply that EM is necessarily superior to GD or vice versa. 
We believe there are settings where EM may converge linearly with higher probability than GD and its variants, and vice versa. - **Question 2:** **To what extent are the assumptions A₁ - A₃ verified in the experiments?** For the synthetic experiments, assumptions A₁-A₃ are verifiably true; the conditions of Theorem 5.1 hold exactly. As for the experiment on FMNIST, we are not in the realizable setting, so it is safe to assume the assumptions do not hold. Still, the empirical results suggest that EM is robust to non-realizability. We will add this point to the revised paper. Thanks for raising this point. - **Question 3:** **What are the practical implications for the future of MoE of this connection with MD?** An example of a practical implication we rigorously showed in Appendix B.6 and B.7 is that it is preferable to initialize the gating parameters of the MoE with a much smaller norm relative to that of the expert parameters. This finding corroborates similar **empirical** findings reported on page 10 of *Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity, 2022*. These corroborating results suggest that initializing the gating parameters with a small norm is a universally good heuristic when training MoE (We will add this to the discussion). - **Question 4:** **Did the asymptotic convergence error of GD in Figures 1 and 2 reach that of EM and Gradient EM?** This is a very good question. Interestingly, we did not observe that this was the case, but we cannot deterministically conclude that it wouldn't have if we kept training for many more iterations. 
- **Question 5:** **How do you explain that the Single Expert in Table 1 trained without Random Inversion reaches a better accuracy (83.2%) compared to the three 2-Component MoLogE methods in Table 2?** Firstly, we clarify that results in Table 2 are obtained on the mixed dataset with random inversions whereas the single expert with accuracy 83.2% was trained on the homogeneous dataset. There are three reasons for this:

1. **The mixed dataset is more difficult to learn than the homogeneous one.** This is observed as we report the single expert having a test accuracy of 10.2% after training on the mixed dataset, whereas training on the homogeneous dataset yielded a test accuracy of 83.2%.
2. **Information sharing:** Each expert now trains on half as many images as in the base case.
3. **Task difficulty:** Learning both expert and gating parameters to correctly separate the inverted and non-inverted images—and then correctly label each—is a more difficult task than labeling all inverted or non-inverted images. In fact, we observed in our experiments that the best-performing MoE model was learning to route samples based on whether they were inverted or non-inverted.

**Lastly, we thank the reviewer for any additional comments and encourage the reviewer to read "Summary of Planned Revisions Based on All Reviewers' Feedback" in rebuttal to reviewer z9yn. Have a nice day!** --- Rebuttal Comment 1.1: Comment: Thank you for your response, I appreciate it! For the next revision of your paper, could you provide an updated version of Figures 1 and 2 (perhaps in the Appendix) with additional iterations? This would help us better understand the asymptotic behavior of GD. Additionally, you could run the Fashion MNIST experiments with multiple training seeds. I will raise my score accordingly, I think the paper deserves acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and comments! 
As per your suggestion, we have updated our Figures 1-3 plots to include more iterations, as can be found at the following screenshot link: https://prnt.sc/40azK725P6SF. We note that, for Figure 1 b), we could not plot the y-axis on the log-scale as before because the error bars would not render properly when the log-likelihood error was of the order of $10^{-10}$ and below. We have also re-run our test of significance and confirm that the results have not changed (p-value of 0.000 in each instance). We also clarify that in each of the 25 FMNIST instances, we sample a new initialization of the MoE (similarly in the synthetic experiments). As a result, we do not believe that changing the seed will impact the results. Still, we will make this change for our mini-batch experiment on CIFAR-10.
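As a point of reference for the synthetic experiments discussed in this thread, a minimal full-batch EM loop for a two-component mixture of linear experts can be sketched as follows. This is an editorial illustration, not the authors' code: for brevity the gate is a constant mixing weight rather than the paper's input-dependent softmax gate, so both M-step updates are closed-form, and the initialization is placed near the true parameters, mirroring the local-neighborhood condition of Theorem 4.2.

```python
import numpy as np

def em_mix_linear(X, y, betas_init, iters=100, sigma2=0.25):
    """Full-batch EM for a 2-component mixture of linear regressions.

    Simplification: the gate is a constant mixing weight pi, not the
    input-dependent softmax gate of the paper's MoLinE, so every M-step
    is a closed-form weighted least-squares solve.
    """
    betas = betas_init.copy()
    n, d = X.shape
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility r[i] = p(z_i = 1 | x_i, y_i).
        ll0 = -0.5 * (y - X @ betas[0]) ** 2 / sigma2
        ll1 = -0.5 * (y - X @ betas[1]) ** 2 / sigma2
        logits = np.clip(ll1 - ll0 + np.log(pi / (1 - pi)), -30, 30)
        r = 1.0 / (1.0 + np.exp(-logits))
        # M-step: weighted least squares per expert, then mixing weight.
        for comp, w in ((0, 1.0 - r), (1, r)):
            Xw = X * w[:, None]
            betas[comp] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(d), Xw.T @ y)
        pi = r.mean()
    return betas, pi

# Realizable synthetic data: each sample is generated by one of two experts.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
true_betas = np.array([[1.0, -2.0, 0.5], [-1.0, 2.0, -0.5]])
z = rng.integers(0, 2, size=2000)
y = (X * true_betas[z]).sum(axis=1) + 0.1 * rng.normal(size=2000)

# Initialize inside a neighborhood of the true parameters, mirroring the
# local-initialization condition of Theorem 4.2.
init = true_betas + 0.5 * rng.normal(size=(2, 3))
betas, pi = em_mix_linear(X, y, init)
```

In the realizable, well-separated regime above, the per-sample residual under the better-fitting expert shrinks to roughly the noise variance, which is the linear-convergence behavior the synthetic experiments illustrate.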
FlashTP: Fused, Sparsity-Aware Tensor Product for Machine Learning Interatomic Potentials
Accept (spotlight poster)
Summary: The paper presents FlashTP, a highly optimized tensor-product library designed to address computational inefficiencies through kernel fusion, sparse computation, and path-aggregated execution. The proposed approach significantly accelerates tensor-product operations in equivariant neural networks. Claims And Evidence: Overall, this paper provides clear evidence to support its claims. In Section 3, the authors conduct a detailed empirical analysis of the computational bottlenecks in tensor-product operations. In Section 4, they explicitly explain how their proposed methods address these issues one by one. This one-to-one correspondence in problem identification and solution design, to some extent, substantiates the potential effectiveness of their approach. Furthermore, in the evaluation section, the paper presents comprehensive ablation studies, including comparisons with baseline methods in forward, backward, and double-backward propagation scenarios, as well as end-to-end experiments within neural networks. These results collectively demonstrate that the proposed approach successfully accelerates tensor-product computations, thereby validating the claim that it enhances the efficiency of equivariant neural networks. Methods And Evaluation Criteria: Overall, the experimental design in this paper is suitable for addressing the proposed problems, though further optimization or clarification is needed in the benchmarking aspect. The benchmarks chosen for the single tensor-product (tp) tests are e3nn and cuEquivariance, while for the end-to-end network tests, SevenNet-l3i5 is used. To my knowledge, cuEquivariance uses DiffDock and MACE for its end-to-end experiments, also demonstrating the difference between its implementation and theoretical limits. Based on this, 1. why were DiffDock and MACE not chosen or included as additional end-to-end benchmarks for a more direct comparison of performance between the two methods? 2. 
Does your method have any theoretical limit differences when compared to cuEquivariance? In the experiments, the chosen node feature dimension is 32, which is common, but it shows a noticeable gap compared to the typical dimensions (128 or 256) used in other popular equivariant networks, such as the Equiformer series. 3. Does your method scale well to tensor features with a larger channel size? If so, is this expansion a simple extension, or does it require additional specialized design or implementation? 4. Is there a channel size limit based on the CUDA kernel register storage limits? [1]Liao Y L, Smidt T. Equiformer: Equivariant graph attention transformer for 3d atomistic graphs[J]. arXiv preprint arXiv:2206.11990, 2022. [2]Liao Y L, Wood B, Das A, et al. Equiformerv2: Improved equivariant transformer for scaling to higher-degree representations[J]. arXiv preprint arXiv:2306.12059, 2023. [3]https://developer.nvidia.com/blog/accelerate-drug-and-material-discovery-with-new-math-library-nvidia-cuequivariance/ Theoretical Claims: The paper does not contain much content regarding theoretical claims. It would be beneficial to add a discussion about the theoretical maximum speed of the proposed method, as well as a scaling analysis of the tensor product with respect to the tensor order in comparison to the baseline methods. Experimental Designs Or Analyses: 5. In Tables 2 and 3, CuEq shows significant performance degradation when l_max > 2, even falling below e3nn. This suggests that CuEq may not have implemented adaptive modifications or optimizations for these cases. Have you checked the correctness of CuEq's execution in these cases or analyzed potential reasons for this behavior? In this context, does the comparison with CuEq still hold significant value? 6. If we focus only on the cases where l_max ≤ 2, the acceleration of FlashTP relative to CuEq seems limited, with only the Double-Backward acceleration being significantly improved. 
Is this behavior related to the specific design of your method? 7. According to Figure 4, assuming that tensor-product operations account for 60.3% and 75.2% of the inference and training time in SevenNet-l3i5, does this imply an upper limit on the speedup (i.e., when the tensor-product time is optimized to 0) of approximately 2.5x for inference and 4x for training? How does this conclusion reconcile with the reported speedups of "4.2× and 3.5×" for inference and training, respectively? Supplementary Material: The supplementary materials include the source code for the method described in the paper. Relation To Broader Scientific Literature: I believe the paper has sufficiently discussed the connection between tensor product computations and various prediction tasks in scientific fields. Essential References Not Discussed: The authors should consider including a discussion on acceleration methods beyond cuEquivariance, particularly in the context of other approaches that improve efficiency. For instance: - Acceleration based on sparsity: [1] Wang N, Lin C, Bronstein M, et al. "Towards Flexible, Efficient, and Effective Tensor Product Networks" (NeurIPS 2023 Workshop: New Frontiers in Graph Learning). - Acceleration based on frequency domain: [2] Luo S, Chen T, Krishnapriyan A S. "Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products" (arXiv preprint arXiv:2401.10216, 2024). [3] Xie Y Q, Daigavane A, Kotak M, et al. "The price of freedom: Exploring tradeoffs between expressivity and computational efficiency in equivariant tensor products" (ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling, 2024). Other Strengths And Weaknesses: Strengths: 1. The paper clearly describes the background and necessity of tensor-product acceleration, providing a detailed analysis of the speed bottlenecks in current tensor-product computations. 
This leads naturally to the proposed solution and evaluations, presenting a coherent story with a very smooth and readable writing style. 2. The method section offers clear motivation and solutions, and the source code is provided in the appendix, which enhances the credibility and reproducibility of the work. 3. The experimental design in the method section is comprehensive, particularly with the inclusion of the Double-Backward time tests, ensuring its applicability to MD tasks. The results also demonstrate significant speedups, confirming the potential of the method in real-world applications. Weaknesses: 1. The paper lacks a discussion and validation on whether the proposed method can be applied to other general equivariant models, which would significantly impact the overall quality of the paper and its real influence in the community. 2. The experimental section still contains aspects that need further clarification and optimization to strengthen the credibility of the results and to establish the speed advantages over existing methods. Further detailed discussions in other sections can serve as a reference. 3. While the method presented in the paper is effective, there seems to be limited claim about its originality and innovation. Additionally, since the paper does not have a section discussing related works, the innovations in FlashTP, as described in Chapter 4, are not directly compared to existing methods. For example, Chapter 4.3 discusses "Applying Sparsity in Tensor-Product," but the application of CG sparsity has already been discussed and applied earlier, as seen in the referenced papers mentioned above. Other Comments Or Suggestions: N/A Questions For Authors: Most of the questions have already been raised in the sections above. Additionally: 8. Do you have plans to open-source the work presented in this paper? 9. 
Regarding Inter-Layer Kernel Fusion, is the fusion heavily tied to the message-passing function, making its extension to other networks a non-trivial task? Overall, I am inclined to believe that there is significant engineering work behind this paper, but I still have concerns about the reliability of the experimental results, the innovation and generalization of the method. If the authors are unable to clarify or address these issues, I may reconsider my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. # R1. Why DiffDock/MACE was not selected for end-to-end evaluation [Q1] - DiffDock is a diffusion-based molecular docking model, not an MLIP model. Its CGTP configuration is different from the configurations used in MLIPs, so it falls outside the scope of FlashTP. - Please refer to our response R2 to Reviewer LtnP for why SevenNet was chosen over MACE. # R2. Roofline analysis of FlashTP [Q2] - We estimate the theoretical performance limits of FlashTP using a roofline analysis based on the peak capabilities of the A100 GPU (19.5 TFLOPS and 1.9 TB/s memory bandwidth). Under this model, the estimated latency for the kernel microbenchmark with $l_{\text{max}} = 3$ is 0.46 ms for the forward pass, 0.89 ms for the backward pass, and 1.35 ms for the double-backward pass. - While the exact derivation of the "Speed of Light" (SoL) latencies reported in the cuEq blog [1] is not publicly documented, FlashTP’s exploitation of sparsity in the Clebsch–Gordan coefficient matrix reduces both theoretical memory traffic and floating-point operations. Consequently, the SoL latency for FlashTP is expected to be lower than that of cuEq. # R3. Channel scaling in FlashTP [Q3, Q4] - FlashTP supports various channel sizes without requiring any modifications, and the channel size is **not** limited by CUDA kernel register storage. The SevenNet variants used in the end-to-end evaluation employ multiple channel sizes (32, 64, and 128) for their features. - Below, we report the kernel microbenchmark results (in milliseconds) for $l_{\text{max}} = 3$ with channel sizes of 32, 64, and 128. These results demonstrate that the performance improvement of FlashTP remains consistent across different channel sizes. 
|Phase|Channel|e3nn|FlashTP (Speedup)|
|-|-|-|-|
|Forward|32|18.40|2.54 (7.2x)|
||64|34.25|5.02 (6.8x)|
||128|66.80|9.98 (6.7x)|
|Backward|32|85.00|6.15 (13.8x)|
||64|163.96|12.14 (13.5x)|
||128|324.73|24.19 (13.4x)|
|Double Backward|32|254.13|8.73 (29.1x)|
||64|503.28|17.41 (28.9x)|
||128|1003.95|34.81 (28.8x)|

# R4. Discussion on cuEq performance characteristics [Q5, Q6] - Please view our response R3 to Reviewer LtnP. # R5. Contributions regarding sparsity - First, it is important to clarify that the sparsity discussed in [2] refers to pruning-based sparsity in CG tensor-product (CGTP) paths, not the sparsity in the Clebsch–Gordan (CG) coefficient matrix exploited by FlashTP. These are orthogonal techniques. - While the idea of leveraging sparsity in CG matrices has been explored (e.g., [3]), prior attempts have not yielded practical speedups. In fact, the sparse CGTP implementation in [3] was slower than its dense counterpart. This is often due to the overhead from metadata and irregular control flow present in sparse GPU computations, which can outweigh computational gains—a well-known issue in deep learning [4]. - In contrast, we believe FlashTP makes a significant contribution by successfully leveraging sparsity to obtain measurable performance improvements, as demonstrated in our ablation study. This is enabled by the introduction of an efficient sparse data structure and a careful kernel design that leverages constant memory. # R6. Discrepancy between Figure 4 and reported inference speedup [Q7] We apologize for the confusion and would like to clarify the source of the discrepancy: - The difference stems from the number of atoms (or more precisely, edges) used in the two settings. The inference breakdown in Figure 4 is obtained from the forward pass during training (~500 atoms), whereas the reported inference speedup is based on an MD simulation on a larger system with 4,096 atoms. 
In the 4K-atom setting, the tensor product accounts for approximately 88% of the total inference time, suggesting a maximum achievable speedup of around 8×. We will update Figure 4 in our final draft. # R7. Discussion about other acceleration methods - Please view our response R3 to Reviewer hLSR. # R8. Will FlashTP be open-sourced [Q8] - Yes, the code will be open-sourced. For more details on our future plans, please refer to our response R4 to Reviewer hLSR. # R9. Removing the inter-layer fusion [Q9] - Without the use of message-passing, inter-layer fusion becomes unnecessary. Removing this fusion requires only a minimal code change, which we have implemented for debugging purposes. # References [1] https://developer.nvidia.com/blog/accelerate-drug-and-material-discovery-with-new-math-library-nvidia-cuequivariance \ [2] Wang et al., Towards Flexible, Efficient, and Effective Tensor Product Networks, NeurIPS 2023 GLFrontiers Workshop.\ [3] Xie et al., The price of freedom: Exploring tradeoffs between expressivity and computational efficiency in equivariant tensor products, ICML 2024 Workshop GRaM.\ [4] Wen, et al., Learning structured sparsity in deep neural networks, NeurIPS 2016.
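The roofline estimate described in R2 amounts to taking the slower of the compute-bound and memory-bound execution times at the device's peak rates. A small editorial sketch, using the A100 peak figures quoted in the rebuttal (19.5 TFLOPS, 1.9 TB/s); the FLOP and byte counts below are hypothetical placeholders, not the paper's actual kernel counts:

```python
def roofline_latency_ms(flops, bytes_moved,
                        peak_flops=19.5e12,  # A100 peak, per the rebuttal
                        peak_bw=1.9e12):     # A100 HBM bandwidth in bytes/s
    """Roofline lower bound on kernel latency: the slower of the
    compute-bound and memory-bound execution times, in milliseconds."""
    compute_s = flops / peak_flops
    memory_s = bytes_moved / peak_bw
    return 1e3 * max(compute_s, memory_s)

# Hypothetical kernel with 2 GFLOP and 1 GB of traffic: memory-bound on A100.
lat = roofline_latency_ms(flops=2e9, bytes_moved=1e9)  # ~0.53 ms
```

Because exploiting CG sparsity reduces both `flops` and `bytes_moved`, it lowers this theoretical bound itself, which is the point made in R2 about FlashTP's speed-of-light latency relative to cuEq's.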
Summary: This paper presents FlashTP, an optimized tensor-product library designed to improve the computational efficiency of equivariant machine-learning models that employ spherical tensors. The authors identify three key inefficiencies in existing tensor-product layers: excessive memory traffic from intermediate data, memory spikes from large output tensors, and lack of sparsity exploitation in the Clebsch-Gordan coefficients. FlashTP addresses these challenges through kernel fusion, sparse tensor-product computation, and path-aggregated execution. Experimental results show significant speedups over e3nn and NVIDIA cuEquivariance. ## update after rebuttal The authors addressed all my questions. I increased my score to 4 and recommend this work for publication. Claims And Evidence: The paper claims that FlashTP significantly accelerates tensor-product operations while reducing memory usage. The experimental results support this claim, showing notable performance improvements across different benchmarks. However, the comparison with cuEquivariance is primarily numerical, lacking a discussion of key differences. It remains unclear whether FlashTP's speedup stems from algorithmic advancements or benefits from being more constrained to architectures similar to NequIP and SevenNet. Methods And Evaluation Criteria: The proposed library is well-aligned with the problem, and the evaluation consists of runtime and memory benchmarks on an NVIDIA A100 GPU. While these benchmarks provide valuable insight, the paper could benefit from additional evaluations on different MLIP architectures, particularly models incorporating symmetric contraction (e.g., MACE). Theoretical Claims: The paper suggests that higher-rank tensors (larger $l_\mathrm{max}$ values) are important to consider when improving the numerical efficiency of tensor-product layers in models designed for atomistic simulations, using this argument to support the advantages over NVIDIA cuEquivariance. 
However, this claim is not entirely accurate—higher-rank tensors are generally only necessary in environments with higher local symmetry, typically relaxed in atomistic simulations. A more general argument beyond atomistic simulations might provide a stronger justification. Experimental Designs Or Analyses: The experiments convincingly show that FlashTP improves computational efficiency but are focused solely on models using spherical tensors. A broader comparison would strengthen the evaluation by mentioning Cartesian-based models and architectures and incorporating symmetric contraction for spherical models. Supplementary Material: All parts of the supplementary material have been reviewed. It lacks important details on the SevenNet and NequIP architectures. Relation To Broader Scientific Literature: The paper addresses an important issue in the context of equivariant machine learning models. However, it focuses exclusively on models in the spherical basis and does not mention other bases, such as the Cartesian one. Including a discussion of these alternatives would offer a more comprehensive understanding of how FlashTP fits within the broader landscape of MLIP development. Some references for Cartesian models include: https://arxiv.org/abs/2306.06482, https://arxiv.org/abs/2405.14253, and https://arxiv.org/abs/2412.18263 (and references therein). Essential References Not Discussed: The paper does not mention works that explore machine-learned force fields in bases other than spherical. Given the growing body of research on Cartesian representations (see, e.g., above references), mentioning these studies could offer valuable context for the paper’s claims. Other Strengths And Weaknesses: Please refer to the comments in previous sections for all strengths and weaknesses of the presented work. Other Comments Or Suggestions: Please refer to the comments in previous sections for all comments and suggestions. 
Questions For Authors: My questions would relate to the issues or comments raised above, so addressing them would suffice to change my evaluation of the paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. # R1. Discussion regarding Cartesian-based models - Please refer to the last paragraph of our response R3 to Reviewer hLSR. # R2. Choice of SevenNet over MACE for end-to-end evaluation - We chose SevenNet over MACE for end-to-end evaluation because 1) we believe SevenNet more accurately reflects the current SOTA, as demonstrated by its strong performance on a widely used benchmark [1]; and 2) its architecture is more broadly representative than that of MACE. SevenNet is primarily composed of CGTP-based interactions—a design principle shared by many other leading models, such as GNoME and Allegro. In contrast, MACE introduces a unique operation called _symmetric contraction_ on top of CGTP, which is specific only to MACE and limits its generality as a benchmark. - Meanwhile, FlashTP can still be applied to accelerate MACE through a hybrid approach with cuEq. Specifically, the CGTP components within MACE can benefit from FlashTP, while the symmetric contraction operation—which falls outside the scope of FlashTP—can continue to be handled by cuEq. In our evaluation, this complementary use of FlashTP and cuEq reduces the per-epoch training time of the MACE-MP Large model (fp32) by 1.2× compared to using cuEq alone. # R3. Speculation on the performance differences between FlashTP and cuEq Since cuEq’s CUDA implementation is not open-source, identifying the exact performance bottlenecks of cuEq is infeasible. As a result, we refrained from including potentially premature speculation in the manuscript. Nonetheless, we have conducted an in-depth analysis of cuEq to the extent possible. Below, we share our informed speculations regarding the observed performance differences between FlashTP and cuEq. ## Performance degradation of cuEq for higher values of $l_{max}$. - cuEq launches different CUDA kernels depending on the CGTP configuration. 
In particular, the kernel used for CGTPs with $l_{max} \leq 2$ (cuEq-fast) differs from the one used for $l_{max} > 2$ (cuEq-slow). - Nsight Compute profiling indicates that while both cuEq-fast and cuEq-slow are L1-bandwidth-bound, their L1 utilization differs markedly: cuEq-fast achieves high L1 utilization, whereas cuEq-slow exhibits significantly lower L1 utilization. We believe this disparity is a primary contributor to the degraded performance of cuEq at higher $l_{\text{max}}$ values. Note that our references to L1 bandwidth/utilization refer to the combined bandwidth/usage of the L1 cache and shared memory, which share the same underlying hardware. - We speculate that the difference in L1 utilization arises from how operands are managed. Specifically, cuEq-slow shows substantially higher shared memory usage compared to cuEq-fast. Our hypothesis is that cuEq-fast handles operands primarily using registers, while cuEq-slow relies more on shared memory due to operand sizes exceeding register capacity. This increased reliance on shared memory may reduce kernel occupancy, which in turn lowers L1 utilization and ultimately impacts performance. ## Source of speedup for FlashTP - FlashTP employs a shared memory–based design regardless of the $l_{max​}$ value, yet it does not experience performance degradation due to low occupancy and consistently outperforms cuEq. This is because FlashTP substantially reduces overall memory traffic—including L1 traffic—such that low occupancy and the resulting lower L1 utilization do not become performance bottlenecks. A suite of memory traffic optimization techniques, including effective kernel fusion and path aggregation, are the key differentiator that sets FlashTP apart from cuEq in both efficiency and scalability. # R4. 
On the practical benefits of higher $l_{max}$ - While it is true that increasing $l_{max}$​ can lead to diminishing returns in accuracy relative to the added computational cost, numerous empirical studies show that modest increases can yield meaningful improvements. For example, models such as MACE (using $l_{max}$​ from 0 to 2), SevenNet (2–3), NequIP (0–3), eSCN (6), and EquiformerV2 (6) demonstrate improved accuracy when employing higher $l_{max}$​ settings. The range of $l_{max}$​ used in our main evaluation (up to 5) is therefore well-aligned with standard practice in MLIP research and reflects a practically useful regime. # R5. Architectural details of SevenNet variants and NequIP usage - The tensor-product configurations used for the kernel microbenchmarks are detailed in Table 4 of Appendix A.1. - **NequIP** was _not_ used in the evaluation. - We apologize for the omission of configuration details for the three SevenNet variants. This information will be added to Appendix A.4 in the final draft. # References [1] Dunn et al., Benchmarking Materials Property Prediction Methods: The Matbench Test Set and Automatminer Reference Algorithm, npj Computational Materials 2020. --- Rebuttal Comment 1.1: Comment: Based on the authors' response, I will raise my score from 3 to 4. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comments and for taking the time to consider our response. We’re glad our clarifications helped address your concerns, and we sincerely appreciate your updated score and recognition of the work’s contributions.
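The Clebsch–Gordan sparsity discussed in these reviews and rebuttals boils down to contracting only the nonzero entries of the coefficient tensor. Below is a minimal COO-style sketch of that idea with random data; this is an editorial illustration of the principle, not FlashTP's actual GPU kernel, which additionally relies on kernel fusion, path aggregation, and constant memory:

```python
import numpy as np

def dense_tp(C, a, b):
    """Dense contraction: out[k] = sum_{i,j} C[i, j, k] * a[i] * b[j]."""
    return np.einsum('ijk,i,j->k', C, a, b)

def sparse_tp(nonzeros, out_dim, a, b):
    """Same contraction, but visiting only the nonzero coefficients,
    stored as (i, j, k, value) tuples in a COO-style list."""
    out = np.zeros(out_dim)
    for i, j, k, v in nonzeros:
        out[k] += v * a[i] * b[j]
    return out

rng = np.random.default_rng(0)
C = rng.normal(size=(5, 5, 5))
C[rng.random(C.shape) > 0.2] = 0.0  # keep ~20% of entries, zero the rest
nz = [(i, j, k, C[i, j, k]) for i, j, k in zip(*np.nonzero(C))]
a, b = rng.normal(size=5), rng.normal(size=5)

assert np.allclose(dense_tp(C, a, b), sparse_tp(nz, 5, a, b))
```

As the rebuttal's R5 notes, turning this arithmetic saving into a real GPU speedup is the hard part: the metadata and irregular control flow of the sparse loop can erase the gains unless the data structure and kernel are designed carefully.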
Summary: In this paper, the authors develop FlashTP, an optimized tensor-product library that uses kernel fusion, sparse computation, and path-aggregated execution. FlashTP achieves significant performance improvements in throughput and memory usage compared to the common libraries e3nn and cuEquivariance. The authors validate the speed improvements using the SevenNet-l3i5 model.

Claims And Evidence: All of the claims made in the paper are directly supported by evidence. The authors make claims about the inefficiencies of current TP operations, and support these claims in Figures 4, 6 and in Table 1. All of the claims about speed-ups relative to e3nn and cuEquivariance are well supported by extensive experimental results. Additionally, the authors provide a good ablation study to demonstrate the contributions of each part of FlashTP.

Methods And Evaluation Criteria: The two main efficiency metrics of interest to downstream researchers are throughput/latency and GPU memory usage, which are the primary metrics reported in the paper. There exist more sophisticated ways to quantify performance, such as the Roofline model, and the evaluation/discussion might be better if such metrics were used, but the reality is that MLFF researchers only care about how fast the model runs and how much memory they will need to run it. For downstream model evaluations, the authors use SevenNet-l3i5, which is a fair representative model for equivariant MLFFs as it shares the same backbone (NequIP) as many other models (MACE, GNoME, etc.). The authors use a reasonable dataset to evaluate on (MPF). The choice of the dataset doesn't really matter, and the authors include a study on scaling the number of atoms in the system, thereby demonstrating applicability to nearly all other MLFF datasets.

Theoretical Claims: N/A -- no theoretical claims are made.
Experimental Designs Or Analyses: As mentioned in the Methods And Evaluation Criteria section, all the experiments/analyses are well designed, and the models/datasets used are good representative samples of the MLFF research community as a whole. I would not expect to see noticeably different results on different models/datasets, at least in terms of the relative performance of FlashTP to e3NN/cuEquivariance.

Supplementary Material: I looked over the Python code and interface for the TP operations, which seems reasonable. I briefly checked the lower level implementations, but I am not familiar with writing these kernels and can't comment on this. I did not attempt to run the code and reproduce their results.

Relation To Broader Scientific Literature: MLFF foundation models have recently become incredibly powerful and useful for a variety of downstream tasks. Most of the top performing foundation models heavily rely on TP operations, and as such are incredibly slow. Additionally, the TP operations are known to be bounded by memory latency and often have poor GPU utilization. This work takes a massive step towards speeding up these foundation models.

Essential References Not Discussed: On the MLFF side, one potential reference that is missing is [1], which is an orthogonal approach used to improve the speed of TP operations. I am not very familiar with low-level ML systems research and likely would not be aware of any missing references. However, one paper that comes to mind that maybe should have been included is FlashAttention [2], which famously used kernel fusion to accelerate LLMs.
[1] Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs, Passaro et al, https://arxiv.org/abs/2302.03655
[2] FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness, Dao et al, https://arxiv.org/abs/2205.14135

Other Strengths And Weaknesses:
Strengths:
- S1: The paper is well written and provides good explanations of why existing TP libraries have huge performance bottlenecks. Figure 6 is a fantastic illustration of this.
- S2: FlashTP has very strong performance improvements over existing libraries.
- S3: The experiments in the paper are well designed and representative of MLFF research/use cases.
- S4: The authors provide an initial (and minimal) codebase for FlashTP.
- S5: The paper includes a good ablation study to further analyze the creation of their fused kernels.

Weaknesses:
- W1: It is impractical to train top MLFF foundation models on a single GPU. Does FlashTP currently work on multi-GPU, and can the authors provide such experiments?
- W2: This paper has the potential to be massively impactful, but only if the authors continue to build out their FlashTP library, including writing good tutorials and adding more of the existing functionality in e3nn.
- W3: The authors' evaluation of GPU performance could be more nuanced, using something like the Roofline model.

Other Comments Or Suggestions:
### Update After Rebuttal
The authors include some additional experiments to show that the proposed method works in a more realistic multi-GPU training setup and improved the performance analysis using the Roofline model. This is a strong work with the potential to have a large impact on the MLFF community.

Questions For Authors:
- Q1: The main thing preventing me from giving this work a 5 is that they do not evaluate on multi-GPU training. Can the authors provide such an evaluation and demonstrate performance scaling on as many GPUs as possible?
- Q2: What is the authors' plan to deploy the proposed code? There are many different operations/utilities in e3nn; do the authors plan to release a more developed software library to replace e3nn? For MLFF researchers, the ideal case would be for FlashTP kernels to be integrated into the existing e3nn codebase to maximize code reusability. In my view, this work is only a strong contribution to the community if the authors can continue to develop the software for people to use.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:
Rebuttal: Thank you for your valuable feedback.

# R1. Evaluation on multi-GPU training [W1, Q1]
- Multi-GPU training with FlashTP can be performed using PyTorch's Distributed Data Parallel (DDP). The table below shows the one-epoch training time (in seconds) for SevenNet-l3i5 on the MPF dataset, using varying numbers of GPU nodes. Each node consists of eight NVIDIA A100 GPUs (80 GB), interconnected with NVLink and NVSwitch, while inter-node communication is handled via a 100 GB/s network. A per-GPU batch size of 16 was used for all runs. FlashTP exhibits strong scalability across multiple GPUs. The modest reduction in speedup at higher GPU counts is likely due to increased inter-node communication overhead.

| # of Nodes (# of GPUs) | 1 (8) | 2 (16) | 4 (32) | 8 (64) |
|-|-|-|-|-|
| e3nn | 735 | 399 | 218 | 123 |
| FlashTP | 164 | 87 | 49 | 30 |
| Speedup | 4.5 | 4.6 | 4.4 | 4.1 |

# R2. Essential references not discussed
- FlashAttention is a well-known work that optimizes self-attention via kernel fusion, and it served as an inspiration for the naming of FlashTP. We will include a reference to FlashAttention in Section 4.2 of the final draft.

# R3. Acceleration of Clebsch-Gordan Tensor Product (CGTP)
We appreciate the reviewers' insightful comments regarding alternative acceleration methods for CGTP. Below, we summarize several relevant approaches. We will incorporate this discussion into the final draft.
- **SO(2) Tensor Product [1]**: This technique reduces the number of non-zero elements in the Clebsch–Gordan coefficient matrix by aligning the axis via rotation, thereby lowering computational complexity. This method is orthogonal to FlashTP and can be synergistic—FlashTP can also benefit from a reduced number of non-zero elements in the input.
- **Gaunt Tensor Product (GTP) [2]**: GTP leverages Fast Fourier Transforms to accelerate CGTP computations by operating in the frequency domain. While computationally efficient, it trades off some expressivity compared to CGTP. Due to the symmetric nature of its operations, GTP is unable to capture chiral features in 3D structures [3].
- **Fused Tensor Product (FTP) [4]**: FTP aggregates all irreducible representations (irreps) into a single tensor, applies native matrix multiplication, and subsequently decomposes the result to recover the original irreps. This reduces computational cost compared to CGTPs, but similar to GTP, it exhibits reduced expressivity [3]. While it has slightly higher computational complexity than GTP, its operations are asymmetric, making it more suitable for capturing parity-sensitive interactions.
- **Cartesian-based models**: These models employ Irreducible Cartesian Tensors (ICT) instead of spherical harmonics as the basis. Recent studies have demonstrated promising performance in both accuracy and latency, particularly at lower ranks, when applied to MLIP tasks [5,6]. Nonetheless, CGTP-based models remain the dominant paradigm—top-performing entries in widely used MLIP benchmarks such as OC20 and Matbench continue to rely on CGTP, underscoring the critical need for further acceleration of CGTP [7,8].

# R4. Future plans for FlashTP [W2, Q2]
FlashTP is already deployed in production at a major tech company to deliver SOTA performance for their simulation workloads. We are also in active discussions with another major tech company to integrate FlashTP into their widely used open-source library for equivariant neural networks.
- We plan to release FlashTP as a standalone Python package and integrate it into the e3nn codebase by implementing a variant of the `TensorProduct` class that utilizes FlashTP under the hood.
- To facilitate reproducibility and adoption by the broader community, we will include a benchmarking script and a comprehensive integration example in the repository.

# R5. Roofline model analysis [W3]
- Please refer to our response R2 to Reviewer GZbV for the theoretical performance of FlashTP estimated from roofline analysis.
- The roofline analysis from Nsight Compute shows that FlashTP kernels are compute-bound, whereas the kernels of cuEq are L1 memory-bound. For more analysis on the differences between FlashTP and cuEq, please see our response R3 to Reviewer LtnP.

# References
[1] Passaro et al., Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs, ICML 2023.
[2] Luo et al., Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products, ICLR 2024.
[3] Xie et al., The price of freedom: Exploring tradeoffs between expressivity and computational efficiency in equivariant tensor products, ICML 2024 Workshop GRaM.
[4] Unke et al., E3x: E(3)-equivariant deep learning made easy, CoRR 2024.
[5] Simeon et al., TensorNet: Cartesian tensor representations for efficient learning of molecular potentials, NeurIPS 2023.
[6] Zaverkin et al., Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing, NeurIPS 2024.
[7] https://opencatalystproject.org/leaderboard.html
[8] https://matbench-discovery.materialsproject.org

---

Rebuttal Comment 1.1:
Comment: I appreciate the response from the authors. The results of the multi-GPU and roofline analyses are strong, and I highly recommend the authors include both of them in the revised manuscript, ideally in the main text. I will keep my score at a 4 only because I don't think this work has broad enough impact to be an oral paper, but I recommend that this work be accepted as a spotlight paper.

---

Reply to Comment 1.1.1:
Comment: Thank you for your insightful comments and encouraging feedback. As suggested, we will incorporate both the multi-GPU and roofline analyses into the revised manuscript. We also sincerely appreciate your recognition of our work and your recommendation for it to be considered as a spotlight paper.
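The roofline model referenced in R5 reduces to a one-line formula: attainable throughput is the minimum of peak compute and the product of memory bandwidth and arithmetic intensity. The sketch below is a generic illustration with hypothetical, roughly A100-class numbers, not a measurement of FlashTP or cuEq:

```python
def roofline_gflops(peak_gflops, bandwidth_gb_s, flops_per_byte):
    """Attainable throughput under the roofline model.

    A kernel is memory-bound when bandwidth * intensity < peak compute,
    and compute-bound otherwise.
    """
    return min(peak_gflops, bandwidth_gb_s * flops_per_byte)

# Illustrative device numbers only (roughly A100 FP32 peak and HBM bandwidth):
peak, bw = 19500.0, 1555.0

low = roofline_gflops(peak, bw, 2.0)    # low arithmetic intensity: memory-bound
high = roofline_gflops(peak, bw, 50.0)  # high arithmetic intensity: compute-bound
print(low, high)
```

Reducing memory traffic (as FlashTP's fusion and path aggregation do) raises a kernel's effective FLOPs-per-byte, moving it from the bandwidth-limited slope of the roofline toward the compute-bound plateau.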
Bayesian Optimization from Human Feedback: Near-Optimal Regret Bounds
Accept (poster)
Summary: This paper studies kernel-based decision-making problems under preferential feedback. The author proposes a phased-elimination-style algorithm (referred to as MR-LPF in the paper) for this problem, which achieves $O(\sqrt{T \gamma_T})$ cumulative regret, while existing algorithms suffer from an additional polynomial dependence on $T$ and on the complexity term arising from the non-linearity of the link function. The author also conducted a numerical experiment on a real-world-motivated problem, although the baseline is limited to the existing UCB-based algorithm.

Claims And Evidence: The author's claimed contributions are well supported from a theoretical perspective, and I could not find any issues as far as I checked.

Methods And Evaluation Criteria: The author's proposed method can be interpreted as a preferential variant of existing phased-elimination or batched pure-exploration-based approaches. It makes sense to propose such a method to achieve better theoretical performance.

Theoretical Claims: I checked all the theoretical claims except for the proof of Theorem 4.7 (Appendix C). The proof is based on standard techniques for GP-bandits, and I could not find any questionable point regarding the correctness of the theoretical claims. On the other hand, one of my concerns is whether the algorithm's near-optimality, which the author appeals to in the paper, is rigorously verified. The existing lower bound of Scarlett et al. is given for Gaussian rewards, and its connection with the preferential feedback model, whose feedback is given based on the comparison of chosen pairs of query points and the realization of a Bernoulli distribution, is not straightforward to me.
My intuition is that the near-optimality of $O(\sqrt{\gamma_T T})$ in this paper's setup is true (and may be rigorously proved by modifying the KL divergence of the Gaussian in Scarlett's proof); however, I believe more careful arguments should be added in the revision if the author claims optimality. If not, the presentation (including the title) of this paper should be modified by softening the claims relating to near-optimality.

Experimental Designs Or Analyses: I only checked the description of the experimental sections in the main paper. The synthetic experiments conducted in this paper use a standard setup in the kernel bandit and Bayesian optimization fields, and I did not find any issue with the setup. One concern is that the average regrets at the initial step (t = 0) are different. To make a fair comparison, I believe that the initial points of the experiments should be the same in this setup.

Supplementary Material: I only checked Appendix A and B.

Relation To Broader Scientific Literature: This paper's results improve upon the existing theoretical guarantees for kernel-based dueling bandits.

Essential References Not Discussed: To my knowledge, the existing results relating to this paper are well discussed in Section 1.1.

Other Strengths And Weaknesses: My main concern about this paper is the novelty. The algorithm construction itself seems to be a natural extension of standard phased elimination. As far as I can see from the proof, the differences from existing phased elimination are as follows:
1. To keep $k_\tau$ small, the learner has to allocate enough samples to the initial phase (the proof from Line 690).
2. The non-adaptive confidence bound for preferential feedback (Theorem 4.7).

I did not check the details of point 2 so far; however, this part seems non-trivial to my knowledge and does not follow directly from techniques in the existing literature. Therefore, my current score is leaning toward acceptance.
Other Comments Or Suggestions:
- Line 646: (x - x')^2 -> |x - x'|^2
- Eq. (18): the second x_t should be modified to x_t'
- The definition of R (e.g., Theorem 4.1, Line 782): log log (T) -> log_2 log_2 (T)
- Line 817: O(log^d T) -> O(log^{d+1} T) (see Vakili's paper).
- An argument for why the maximum information gain of the kernel in Eq. (4) increases at the same rate as that of the kernel $k$ should be added for readers who are not familiar with this field. I did not find any such argument in existing papers, including Pásztor's paper.

Questions For Authors: Please see the "Theoretical Claims" section.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
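The phased-elimination scheme discussed in this review can be illustrated with a self-contained toy. This is not the authors' MR-LPF: there is no kernel model, and the Borda-style scores, sample counts, and confidence radius below are simplified, hypothetical choices made only to show the eliminate-then-resample structure under a Bradley–Terry–Luce preference link:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def phased_elimination(f, rounds=3, m0=500, delta=0.05, seed=0):
    """Toy phased elimination with pairwise preference feedback.

    f: list of latent utilities; preferences follow the BTL model,
    P(i beats j) = sigmoid(f[i] - f[j]). Each round, every active pair
    is compared m times, then arms whose optimistic score falls below
    the best pessimistic score are eliminated.
    """
    rng = random.Random(seed)
    active = list(range(len(f)))
    m = m0
    for _ in range(rounds):
        if len(active) == 1:
            break
        # Borda-style score: average win rate against the other active arms
        score = {}
        for i in active:
            wins = 0
            for j in active:
                if i == j:
                    continue
                wins += sum(rng.random() < sigmoid(f[i] - f[j]) for _ in range(m))
            score[i] = wins / (m * (len(active) - 1))
        # Simplified Hoeffding-style confidence radius (placeholder constants)
        radius = math.sqrt(math.log(2 * len(active) / delta) / (2 * m))
        best = max(score.values())
        active = [i for i in active if score[i] + 2 * radius >= best]
        m *= 2  # sample more aggressively in later rounds
    return active

print(phased_elimination([0.0, 0.2, 0.5, 0.9]))
```

With the utilities above, suboptimal arms are eliminated over the rounds and only the best arm (index 3) survives; the doubling sample schedule mirrors the multi-round structure that makes the confidence bounds non-adaptive within each round.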
Rebuttal 1:
Rebuttal: Thank you for the positive feedback and careful review of the technical material, which is truly invaluable for us. Below, we address your comments and questions, which we hope will enhance your evaluation of the paper.

> Interpretation of the lower bound of Scarlett et al., 2017 provided for standard BO with scalar feedback.

We appreciate your raising this subtle point. Here we provide a technical clarification on our result and the mentioned lower bound. Consider a standard BO setting with noisy scalar observations $o = f(x) + \varepsilon$ (using the notation at the beginning of Section 2.3), where the noise terms $\varepsilon$ are zero-mean, independent, and identically distributed across samples. Suppose at each step we select two points $x_t$ and $x'_t$, but instead of observing scalar feedback $o_t$ and $o'_t$ (with $o_t=f(x_t)+\varepsilon_t$ and $o'_t=f(x'_t)+\varepsilon'_t$), we receive preference feedback as a Bernoulli random variable $y_t = I(o_t > o'_t)$. Under the Bradley–Terry–Luce model, this preference feedback corresponds exactly to the case where the noise difference $\varepsilon'_t - \varepsilon_t$ follows a logistic distribution, whose CDF is the sigmoid function: $P(\varepsilon'_t - \varepsilon_t < f(x)-f(x')) = \mu(f(x)-f(x'))$. For this to be satisfied, the individual noise terms $\varepsilon_t$ can be Gumbel-distributed. Thus, the lower bound on regret in our BOHF setting should be at least twice the lower bound for standard BO with scalar observations under Gumbel noise. In our revision, we will explicitly state this as a lemma and provide all the details. We acknowledge your remark that Scarlett et al. (2017) specifically use Gaussian noise in their lower bound example construction. However, it is standard practice in the BO literature to compare regret bounds under general sub-Gaussian noise (including bounded noise) with the lower bound result from Scarlett et al.
(2017); see, for example, Salgia et al., 2021 and Li & Scarlett, 2022, among many others. Following your comment, we will explicitly state this result as a lemma in Section 4 and clarify the noise distribution assumption from Scarlett et al. (2017) in the revision. We are happy to discuss this further or answer any additional questions.

> Novelty:

We agree with the reviewer's understanding of our theoretical results. Within the multi-round structure of our algorithm, we carefully characterize the impact of curvature and successfully remove the dependency on $\kappa$. Moreover, we leverage Theorem 4.7 to further improve regret with respect to $T$. These algorithmic and analytical contributions result in regret bounds of the same order as standard BO, providing a substantial improvement over existing methods, whose bounds can become vacuous for many kernels of theoretical and practical importance. We support these results through synthetic experiments on RKHS functions, which closely align with our theoretical assumptions and corroborate our analytical results. Additionally, we show the utility of our algorithm using a real-world Yelp dataset, highlighting that we have introduced a practical algorithm accompanied by strong theoretical guarantees.

> Other Comments Or Suggestions:

*The average regrets at the initial step (t = 0) are different:* This is due to arbitrary implementation choices, which do not have a significant effect on the eventual performance seen in the figures. We will fix this in the revision.

*Why does the maximum information gain of the kernel in Eq. (4) increase at the same rate as that of the kernel?* In Proposition 4 of Pásztor et al. (2024), it is shown (see their Appendix C.1) that the eigenvalues of the dueling kernel $\mathbb{k}$ are exactly twice those of the original kernel $k$.
Since the maximum information gain scales with the decay rate of the kernel eigenvalues (Vakili et al., 2021b, Corollary 1), both kernels exhibit the same scaling of the information gain. This property was also implicitly used in Pásztor et al. (2024). Following your comment, we will make this point explicit and clear.

We fixed the typos mentioned by the reviewer. Thanks.

---

We again thank you for your invaluable technical comments, which significantly enhance the presentation of the work. We hope addressing your comments improves your evaluation of the paper and remain available for further discussion.

---

Rebuttal Comment 1.1:
Comment: Thank you for the authors' clarification. I understand that the authors consider the reduction to the lower bound under the Gumbel noise sequence. However, I disagree with the following claim by the authors:

> However, it is standard practice in the BO literature to compare regret bounds under general sub-Gaussian noise (including bounded noise) with the lower bound result from Scarlett et al. (2017); see, for example, Salgia et al., 2021 and Li & Scarlett, 2022 among many others.

The existing works (including Salgia et al., 2021 and Li & Scarlett, 2022) show regret upper bounds under sub-Gaussian noise. This is enough to show the near-optimality of those algorithms, since a regret upper bound under sub-Gaussian noise also holds under Gaussian noise. On the other hand, the authors' argument only shows that the lower bound in the preferential setting is of the same rate as the standard-problem lower bound under Gumbel noise. This does not formally imply the optimality of the algorithm, since the lower bound is only provided under Gaussian noise, not the more general sub-Gaussian noise. Therefore, I strongly recommend that the authors soften the claims relating to near-optimality. I believe this modification will not be critical to this paper's results.

---

Reply to Comment 1.1.1:
Comment: We agree with the reviewer and will make this point clear in the revision. Specifically, as noted above, and following your comment, we will explicitly state the reduction to scalar feedback with Gumbel noise as a lemma in Section 4. We will also clarify that a direct comparison with the lower bound in Scarlett et al. (2017) is not formally adequate, since their construction assumes Gaussian noise, whereas our model relies on preference feedback that corresponds to Gumbel noise. We refer to the connection only as an informal indication of the quality of our results, not as a formal order optimality claim. Thank you again for your detailed and technical review. We greatly appreciate your engagement with the review process, which we believe significantly improved the paper.
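The Gumbel-to-logistic reduction discussed in this thread can be checked numerically. The sketch below is illustrative only (function names and sample sizes are my own); it samples standard Gumbel noise via the inverse CDF and verifies that the induced preference probability matches the sigmoid link of the Bradley–Terry–Luce model:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gumbel(rng):
    # Standard Gumbel via inverse CDF: F^{-1}(u) = -log(-log(u))
    return -math.log(-math.log(rng.random()))

def preference_prob(delta_f, n=200_000, seed=0):
    """Empirical P(f(x) + eps > f(x') + eps') with iid standard Gumbel noise,
    where delta_f = f(x) - f(x'). Since the difference of two iid Gumbels is
    logistic, this probability equals sigmoid(delta_f), i.e., the BTL link."""
    rng = random.Random(seed)
    wins = sum(delta_f + gumbel(rng) > gumbel(rng) for _ in range(n))
    return wins / n

for d in (0.0, 0.5, 1.5):
    print(d, preference_prob(d), sigmoid(d))
```

The empirical probabilities agree with the sigmoid values up to Monte Carlo error, which is the sense in which the binary feedback $y_t = I(o_t > o'_t)$ under Gumbel observation noise reproduces the preference model analyzed in the paper.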
Summary: This paper studies Bayesian optimization (BO) with preference feedback, in which, at every time step, a pair of inputs is selected and only a binary preference feedback is observed. The paper incorporates preference feedback into a multi-round structure inspired by previous works and proves that the resulting algorithm achieves $\sqrt{\Gamma(T)T}$ regret, which is nearly optimal.

Claims And Evidence: The paper claims that the proposed algorithm for BOHF can achieve a regret upper bound of $\sqrt{\Gamma(T)T}$, which is nearly optimal and matches the order of regret for scalar-valued feedback. These claims are supported by rigorous theoretical analysis, and the regret bound appears to be (nearly) tight. However, the experimental results show that the proposed method performs better only for very large horizons, which weakens the claim of practical superiority over existing methods like MaxMinLCB.

Methods And Evaluation Criteria: The methods are generally appropriate for the problem studied, as the paper effectively adapts previous techniques for batched Bayesian optimization to the preference feedback setting. However, there may be a practical limitation: the proposed algorithm does not utilize preference observations when selecting the queries within a batch, which could potentially improve performance. The method is evaluated based on regret from both selected arms, which is standard in the literature on dueling bandits and hence acceptable.

Theoretical Claims: The most important strength of the paper is the theoretical results, since it provides a regret bound of the order of $\sqrt{\Gamma(T)T}$, which is near-optimal for the problem setting. I also appreciate the insights given in the theoretical analysis section (especially the one at the bottom of page 6), which provide an explanation as to why the algorithm can achieve such a small regret. However, the technical novelty of the theoretical analysis is a concern.
From a high level, the analysis techniques appear to be a combination of those from existing works (Li & Scarlett, 2022; Pasztor et al., 2024; Xu et al., 2024). A clearer explanation of the technical challenges in the proof would enhance the contribution.

Experimental Designs Or Analyses: The experimental results have a notable weakness: the proposed algorithm only outperforms the existing method of MaxMinLCB when the horizon is very large. This limits its practical applicability in scenarios with smaller observation budgets. So, the experiments do not convincingly demonstrate practical superiority across a range of settings.

Supplementary Material: I did not get to check the detailed analysis in the appendix, but the main theoretical results in the main paper make sense to me, and are indeed what one would expect from such an algorithm.

Relation To Broader Scientific Literature: The paper builds on prior works in batched Bayesian optimization and dueling bandits (or preferential BO), particularly combining techniques from works such as Li & Scarlett, 2022, Pasztor et al., 2024, and Xu et al., 2024. The paper indeed makes an important theoretical contribution to the fields of Bayesian optimization and dueling bandits.

Essential References Not Discussed: The following is an important missing reference: "Neural Dueling Bandits. 2024". That paper uses neural networks to model the nonlinear function in dueling bandits and is hence highly related to the current paper.

Other Strengths And Weaknesses:
Strengths:
- The proposed algorithm achieves a near-optimal regret order, which is, to the best of my knowledge, the tightest regret bound for BOHF problems.

Weaknesses:
- The technical novelty is a concern, as the analysis seems to combine existing techniques without clearly discussing the technical challenges involved.
- The proposed algorithm does not use the preference observations when selecting queries in a batch, which may limit its practical performance.
- Experiments show poor performance for small horizons, which is a critical limitation for practical applications.

Other Comments Or Suggestions: The paper would benefit from a clearer explanation of the technical challenges involved in the proof, as this would help establish the novelty of the theoretical contributions. Additionally, incorporating preference observations into the query selection process could potentially improve the algorithm's performance and address its main practical limitation (i.e., it may help the proposed algorithm perform better than MaxMinLCB even with small horizons).

Questions For Authors:
- Could **the constants** in the regret bound for BOHF be worse than those for scalar-valued feedback? Intuitively, weaker feedback should degrade performance. Clarifying this could provide a more complete picture of the theoretical guarantees.
- The proposed algorithm does not make use of the preference observations when selecting queries within a batch? In practice, incorporating these observations should lead to better performance. Could the algorithm be extended to account for them?
- What are the technical challenges in the proof that distinguish this work from prior analyses (e.g., Li & Scarlett, 2022; Pasztor et al., 2024; Xu et al., 2024)? A clear explanation of these challenges would help establish the paper's novelty.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed, comprehensive, and constructive feedback. We are glad that you found the theoretical results strong and the provided insights useful. Below, we address the questions, which we hope will help clarify and enhance your evaluation of the paper.

> Could the constants in the regret bound for BOHF be worse than those for scalar-valued feedback?

We naturally expect the constants to be worse, as the reviewer suggested, given that standard BO provides stronger feedback. While it is beyond the scope of this rebuttal to characterize the constants precisely—we note that even in standard BO, despite its extensive literature, there is no clear characterization of such constants—your comment inspired us to run experiments comparing empirical regret. Specifically, we will compare the regret, measured in terms of the underlying function values, for our BOHF algorithm and standard BO algorithms such as GP-UCB and BPE. Intuitively and as expected, we anticipate observing lower empirical regret for standard BO.

> The proposed algorithm does not make use of the preference observations when selecting queries within a batch? In practice, incorporating these observations should lead to better performance. Could the algorithm be extended to account for them?

The confidence intervals are explicitly utilized at the end of each round to eliminate actions that are unlikely to be optimal, making effective use of preference predictions. However, incorporating preference predictions within each round introduces intricate statistical dependencies that invalidate our current analysis framework, including the validity of confidence intervals derived at the end of such rounds. In short, employing preferences both during and at the end of each round cannot be handled with our current theoretical tools and would require further theoretical development. Empirically, the strong eventual performance of our algorithm relative to the state of the art demonstrates that our current approach—using preferences solely at the end of each round—is already effective in practice.

> Technical challenges:

Although inspired by existing methods, we introduce notable algorithmic and analytical novelties. In particular, the multi-round structure enables careful control of the dependency on $\kappa$, as detailed in our analysis and discussed at the bottom of page 6. Moreover, this structure allows us to leverage the new confidence interval established in Theorem 4.7, further improving the regret bounds with respect to $T$. Together, these improvements contribute to our near-optimal performance guarantees—a substantial improvement over the state of the art, which often results in possibly vacuous regret bounds in many cases of theoretical and practical interest, such as Matérn and Neural Tangent kernels.

> Missing reference: "Neural Dueling Bandits. 2024".

Thank you for mentioning this missing reference. We will include a review in the revision. They consider a wide neural network for target function prediction, in contrast to the kernel methods used in BOHF. However, their results are closely related, as a wide neural network can be approximated by the Neural Tangent Kernel as the layer widths grow. Beyond this modeling difference, their algorithm also differs from ours as well as from those of Pásztor et al., 2024 and Xu et al., 2024. Their method, referred to as Neural Dueling Bandit UCB (NDB-UCB), selects one action as the maximizer of the prediction and the other as the maximizer of the UCB. They also introduce a variant where the second action is selected via Thompson sampling instead of UCB. Their performance guarantees are expressed in terms of the effective dimension of the neural network model (see their equation (4)), which resembles the complexity term $\Gamma(T)$ in BOHF. They assume that the effective dimension grows slower than $O(\sqrt{T})$, which is a limitation, as otherwise their regret bounds become vacuous. Their regret bounds also scale with the curvature parameter $\kappa$.

*Verma, Arun, et al. "Neural Dueling Bandits: Preference-Based Optimization with Human Feedback." The Thirteenth International Conference on Learning Representations, 2025.*

---

Thank you again for your positive and constructive review. We are happy to answer any further questions during the rebuttal period.
Summary: This paper proposes a new algorithm, Multi-Round Learning from Preference-based Feedback (MR-LPF), for Bayesian Optimization from Human Feedback (BOHF). MR-LPF achieves a significantly improved regret bound of $\tilde{O}(\sqrt{\Gamma(T)T})$, matching the optimal regret bounds of conventional Bayesian optimization and eliminating the dependency on the curvature of the link function present in prior work. The algorithm operates in rounds, selecting pairs of actions with maximum uncertainty for preference feedback and iteratively refining a set of potentially optimal actions using confidence bounds. Claims And Evidence: The theoretical claims regarding the regret bound of the MR-LPF algorithm are supported by rigorous mathematical proofs presented in the appendices. These proofs build upon established techniques in Bayesian optimization and kernel methods, providing a clear and convincing argument for the improved theoretical performance. However, the empirical evidence presented to demonstrate the practical superiority of MR-LPF compared to existing methods is less convincing. While the figures show MR-LPF achieving lower regret than MaxMinLCB in the tested scenarios, the improvement is often modest, and the figures do not clearly illustrate the theoretical order-of-magnitude improvement suggested by the regret bounds. Further experiments with a wider range of benchmark functions and baselines, potentially including statistical significance testing, would be needed to fully substantiate the claim of superior practical performance. Specifically, the paper lacks the comparison with POP-BO (Xu et al., 2024), making the experiments less convincing. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria generally make sense for the problem at hand. Theoretical Claims: I have not thoroughly checked every step of the proofs, which are complex and contained in the appendices. Experimental Designs Or Analyses: 1.
While the Yelp dataset provides a real-world application, the experimental setup is relatively limited. As your initial answer mentions, the paper itself highlights the increasing importance of preference learning due to LLMs. However, no experiments directly involve LLMs. A more compelling demonstration of practical utility would involve a task where preference feedback is elicited from human interactions with an LLM, for example, prompt optimization or text summarization comparison. This would more closely align the experiments with the motivating applications. 2. A crucial comparison is missing: POP-BO (Xu et al., 2024). POP-BO is a direct competitor in the BOHF literature, and omitting it significantly weakens the empirical evaluation. Without this comparison, it's difficult to definitively assess MR-LPF's relative performance within the current state-of-the-art. Supplementary Material: I reviewed the supplementary material, focusing primarily on Appendices A, B, C, and D. Appendix A provides background on RKHS and Mercer's Theorem, which is necessary for understanding the theoretical underpinnings. Appendix B contains the detailed proof of the main regret bound (Theorem 4.1) and the corollaries related to sample complexity. Appendix C presents the proof of the key confidence interval result (Theorem 4.7). Appendix D provides the experimental details, including descriptions of the test functions, the Yelp dataset preprocessing, hyperparameter settings, and computational resources used, as well as a brief description of the MaxMinLCB algorithm. I examined these sections to understand the theoretical justifications and the experimental setup more thoroughly. Relation To Broader Scientific Literature: The key contribution of this paper, a tighter regret bound for Bayesian Optimization from Human Feedback (BOHF), builds upon and significantly improves existing work in several areas. 
It relates to the broader literature on conventional Bayesian optimization (BO), where order-optimal regret bounds of $\tilde{O}(\sqrt{\Gamma(T)T})$ have been established for algorithms like GP-UCB and GP-TS (Srinivas et al., 2009; Chowdhury & Gopalan, 2017). The paper directly addresses the limitations of prior BOHF algorithms, specifically MaxMinLCB (Pásztor et al., 2024) and POP-BO (Xu et al., 2024), which had weaker regret bounds of $\tilde{O}(\Gamma(T)\kappa^2\sqrt{T})$ and $\tilde{O}((\Gamma(T)T)^{\frac{3}{4}})$, respectively. The paper's achievement of a regret bound matching that of conventional BO, despite using the weaker preference-based feedback, is a notable advancement, demonstrating that the same sample complexities are achievable. The multi-round structure of the MR-LPF algorithm is inspired by, but distinct from, the Batch Pure Exploration (BPE) algorithm (Li & Scarlett, 2022) in conventional BO. It also connects to, but distinguishes itself from, dueling bandits and reinforcement learning from human feedback (RLHF) literature by focusing on kernel-based settings rather than tabular or linear settings. Essential References Not Discussed: As far as I know, there are no further related papers that need to be cited. Other Strengths And Weaknesses: ## Strengths: - The paper's main strength is its theoretical contribution. It provides a new algorithm, MR-LPF, for BOHF and proves a significantly improved regret bound. This matches the order-optimal regret bounds of conventional Bayesian optimization and, crucially, eliminates the dependence on the curvature of the link function ($\kappa$) that plagued prior work. This is a substantial theoretical advancement in the field. - The MR-LPF algorithm is well-motivated and clearly presented. The multi-round structure, with its variance-based action selection and confidence-bound-based pruning, is intuitive and logically sound. The algorithm description is easy to follow.
- The paper provides detailed proofs for its theoretical claims, building upon established techniques in Bayesian optimization and kernel methods. The appendices contain the necessary mathematical derivations. ## Weaknesses: - One of the most significant weaknesses is the omission of a comparison with POP-BO (Xu et al., 2024), a direct and highly relevant competitor in the BOHF literature. This makes it difficult to assess the relative performance of MR-LPF within the current state-of-the-art. - While the experiments show MR-LPF performing better than MaxMinLCB (only after a large number of queries), the improvement is often not dramatic, and the figures don't clearly showcase the theoretical order-of-magnitude improvement. - Despite motivating BOHF with applications like prompt optimization for LLMs, the paper does not include any experiments directly involving LLMs. This creates a disconnect between the stated motivation and the empirical evaluation. Other Comments Or Suggestions: N.A. Questions For Authors: 1. Why was POP-BO (Xu et al., 2024) not included as a baseline in the empirical evaluation? 2. Could you clarify the derivation of the inequality in Equation (42) within the proof of Lemma C.1? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comprehensive review and positive feedback on our work. We are glad that you found our theoretical contributions substantial and the algorithm well-motivated and clearly presented. Below, we respond to your questions and comments, which we hope will further clarify and improve the paper. > Comparison with POP-BO (Xu et al., 2024) > Question 1: Why was POP-BO (Xu et al., 2024) not included as a baseline in the empirical evaluation? Theoretically, as discussed in detail in the introduction and summarized in Table 1, POP-BO provides the weakest regret bound among existing methods. In contrast, we achieve near-optimal regret bounds, explicitly showing that the number of preferential feedback samples required to identify near-optimal actions matches the order of scalar-valued feedback samples. Both MaxMinLCB and POP-BO exhibit vacuous regret bounds for kernels of practical and theoretical importance, such as Matérn and Neural Tangent kernels—with POP-BO being the weaker of the two. Empirically, we selected MaxMinLCB as the comparative baseline because Pásztor et al. (2024) have already demonstrated that MaxMinLCB outperforms POP-BO as well as several heuristic approaches. Thus, our choice ensures that our algorithm is compared against the strongest and most relevant available baseline. We appreciate the reviewer highlighting this point and will clarify this rationale in the revision. > While the experiments show MR-LPF performing better than MaxMinLCB, the improvement is often not dramatic. We fully agree with this observation. Indeed, the main contribution of our paper is analytical, and we establish substantial theoretical improvements over existing methods. The theoretical bounds we provide are *upper bounds* on performance. Based on the lower bounds for standard Bayesian optimization from Scarlett et al. (2017), our results are essentially unimprovable. 
It remains an open question whether the weaker bounds of Pásztor et al. (2024) and Xu et al. (2024) are artifacts of their proof techniques or reflect fundamental limitations of their algorithms—that is, whether the suboptimal upper bounds in those works truly capture the algorithms’ performance or whether tighter bounds could be derived for their algorithms. Nonetheless, our experiments show that the empirical performance of our algorithm is consistently strong across a range of synthetic RKHS examples that closely follow the theoretical assumptions, as well as on a real-world Yelp dataset. The goal of these experiments is to demonstrate that our contributions are not merely theoretical—they also yield a practical and robust algorithm with solid empirical performance, consistently outperforming the current state-of-the-art. That said, the main focus of our paper is to establish the core theoretical result: that the same number of preference samples are sufficient to identify near-optimal actions as in standard Bayesian optimization with scalar-valued feedback. > Despite motivating BOHF with applications like prompt optimization for LLMs, the paper does not include any experiments directly involving LLMs. This creates a disconnect between the stated motivation and the empirical evaluation. We agree with the reviewer that our experiments do not directly involve learning tasks using LLMs—we only use OpenAI's text embeddings model to generate vector embeddings for Yelp reviews. However, the BOHF framework is strongly motivated by an expanding body of work closely related to dueling bandits and reinforcement learning from human feedback (RLHF). This motivation is further supported by the growing number of recent papers addressing the same or similar problems. Our method can also be directly applied to prompt optimization for LLMs. 
However, such experiments require extensive setup and thorough implementations, which we view as an important and promising direction for a separate future work. > Question 2. Could you clarify the derivation of the inequality in Equation (42) within the proof of Lemma C.1? The inequality follows from the Sherman–Morrison formula and a rearrangement of terms. We will add the intermediate steps here and carefully review the proof to include clearer explanations at each step. --- Thank you again for your positive and constructive review, which helps clarify and improve the presentation of the paper.
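The rebuttal above appeals to the Sherman–Morrison formula without spelling out the steps. As a generic numerical sanity check of the identity itself (a standalone sketch with arbitrary random data, not a reconstruction of the paper's Equation (42)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned base matrix
u = rng.standard_normal(n)
v = rng.standard_normal(n)

A_inv = np.linalg.inv(A)
# Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
sm = A_inv - np.outer(A_inv @ u, v @ A_inv) / (1.0 + v @ A_inv @ u)
direct = np.linalg.inv(A + np.outer(u, v))
assert np.allclose(sm, direct)  # the rank-one update formula matches the direct inverse
```

This kind of rank-one update is what makes sequential kernel/confidence-interval computations cheap, since the inverse need not be recomputed from scratch after each new observation.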
Summary: This paper proposes a Bayesian optimisation method with only human preference-based feedback instead of classical scalar values. The order-optimal sample complexities of conventional BO are recovered. That means the number of preferential feedback samples is of the same order as the number of scalar feedback samples. Claims And Evidence: The claims are supported by thorough analysis and some empirical studies. Methods And Evaluation Criteria: Yes, but the way to handle preferences is not clear. In Section 3, the main algorithm is introduced, but how to train the utility function kernel ridge regression model is not clear. How is the preference data used for the model training? The comparative method is only MaxMinLCB. It would be better to include more related methods as baselines. Theoretical Claims: No, I did not check the proof process in detail. The claimed results look reasonable to me. Experimental Designs Or Analyses: The comparative method is only MaxMinLCB. It would be better to include more related methods as baselines. For the Ackley and Yelp data, do they satisfy the RKHS assumption? Supplementary Material: No Relation To Broader Scientific Literature: The main contribution of this work is the derived tighter performance guarantees compared with related works. Essential References Not Discussed: N/A Other Strengths And Weaknesses: One concern is the novelty of the proposed algorithm in Section 3, where the preference-based function learning is mainly based on the existing approaches reviewed in Section 2.3. The other concern is the experimental evaluation, where only one comparative method is given. Other Comments Or Suggestions: If the theoretical analysis is the main contribution of this work and algorithm design is not, I would suggest moving the section to Subsection 2.4, so that readers can focus mainly on Section 4. Questions For Authors: Is there any difference between the proposed algorithm and the ones from the literature? Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive comments. Below, we provide detailed responses and will incorporate these suggestions to further improve the presentation of the paper. > In Section 3, the main algorithm is introduced but how to train the utility function kernel ridge regression model is not clear. How is the preference data used for the model training? The preference function prediction is described in Section 2.3. Specifically, we express the preference function in terms of parameters $\boldsymbol{\theta}$ as shown in Equation (8), and obtain these parameters by minimizing the regularized negative log-likelihood loss defined in Equation (9). From a theoretical standpoint, any optimization algorithm can be used for this minimization. In our experiments, we employ gradient descent, and we clarify the choice of learning rate under *Experimental Details* in the appendix. Thank you for highlighting this point—we will ensure that this clarification is made explicit throughout the paper, including in Section 3 following the algorithm description. > For Ackley and Yelp data, do they satisfy the RKHS assumption? The Ackley function is infinitely differentiable. By the equivalence between the RKHS of Matérn kernels and Sobolev spaces, it belongs to the RKHS of all Matérn kernels with smoothness parameter $\nu \ge 1.5$, which includes the Squared Exponential kernel. However, its RKHS norm is difficult to characterize explicitly. For the Yelp dataset, we cannot assert that the utility function belongs to the RKHS. We interpret our experimental setup as follows: the RKHS experiments (first row of Figure 1) exactly match our theoretical assumptions and support the theory; the Ackley function provides a diverse optimization landscape while still fitting the theoretical framework; and the Yelp dataset offers a real-world test of practical utility. 
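The training procedure described above (parameterizing the utility in terms of kernel evaluations and minimizing a regularized negative log-likelihood by gradient descent) can be sketched roughly as follows. This is a generic Bradley–Terry-style kernel logistic regression, not the authors' exact Equations (8)–(9); the RBF kernel, the 1-D toy utility, and the step-size/iteration choices are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, ell=0.2):
    """Squared-exponential kernel matrix between row-stacked inputs."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

def fit_preference_utility(X, pairs, prefs, lam=0.1, lr=0.1, steps=2000):
    """Fit u = K @ theta from pairwise preferences by gradient descent on a
    regularized logistic (Bradley-Terry) negative log-likelihood."""
    K = rbf_kernel(X, X)
    theta = np.zeros(len(X))
    for _ in range(steps):
        u = K @ theta
        z = u[pairs[:, 0]] - u[pairs[:, 1]]      # utility gap for each duel
        p = 1.0 / (1.0 + np.exp(-z))             # sigmoid link: P(first wins)
        g = np.zeros_like(u)                     # dLoss/du accumulated per action
        np.add.at(g, pairs[:, 0], p - prefs)
        np.add.at(g, pairs[:, 1], prefs - p)
        grad = K @ g + 2.0 * lam * (K @ theta)   # RKHS-norm regularizer theta^T K theta
        theta -= lr * grad / len(pairs)
    return theta

# Toy example: recover a 1-D utility peaked at x = 0.3 from noiseless duels.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 20)[:, None]
true_u = -(X[:, 0] - 0.3) ** 2
pairs = rng.integers(0, len(X), size=(300, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]        # drop self-comparisons
prefs = (true_u[pairs[:, 0]] > true_u[pairs[:, 1]]).astype(float)
theta = fit_preference_utility(X, pairs, prefs)
u_hat = rbf_kernel(X, X) @ theta
```

Only the ordering of utilities is identifiable from preferences (the scale is pinned down by the regularizer), so the fitted `u_hat` should rank actions like the true utility and place its maximizer near 0.3.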
> One concern is the novelty of the proposed algorithm in Section 3 where the preference-based function learning is mainly based on the existing approaches reviewed in Section 2.3. > Is there any difference between the proposed algorithm with the ones from the literature? The key novelty and contribution of our work is the performance guarantees (regret bounds and sample complexities) we establish, which achieve near-optimality. We establish the key result that the number of preference samples required to identify near-optimal actions matches the order of scalar-valued feedback samples. This is in sharp contrast to prior works—existing bounds become vacuous in many cases of practical and theoretical interest, such as Matérn and Neural Tangent kernels. To achieve these results, we propose the MR-LPF algorithm. While MR-LPF and its analysis build on the well-established literature on kernel methods and Bayesian optimization, it introduces important algorithmic and analytical innovations. In particular, the multi-round structure enables precise control over the dependency on $\kappa$, as detailed in our analysis, and allows us to leverage the new confidence interval in Theorem 4.7 to obtain tighter bounds with respect to $T$. While inspired by Li & Scarlett (2022)’s work on standard BO, our algorithm and analysis differ due to the reduced preference feedback model, which introduces new challenges we address both algorithmically and analytically—including removing the dependency on the curvature of the nonlinear link function and confidence intervals for kernel methods from preference feedback. > The experimental evaluation where only one comparative method (MaxMinLCB) is given. We chose MaxMinLCB as the baseline because it is one of the two existing BOHF algorithms with theoretical guarantees (along with POP-BO), and Pásztor et al. (2024) show that it outperforms POP-BO and several heuristic methods. 
This makes it the strongest and most relevant baseline for comparison. Our goal is to demonstrate that our algorithm not only significantly improves theoretical guarantees but also performs well empirically. --- We thank the reviewer for the positive feedback and constructive comments. We believe that addressing these points will significantly enhance the clarity and quality of our paper.
Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding
Accept (poster)
Summary: This paper proposes a new positional encoding method for LLMs, which could enhance the position-addressing ability of transformers. Permutation and orthogonal equivariance are also applied to enforce the positional encoding. This method demonstrates superior performance on various tasks, especially long-context tasks, such as passkey retrieval tasks. Claims And Evidence: The claims are supported by experimental results and theoretical proofs. Methods And Evaluation Criteria: The authors propose context-aware positional encodings aiming to improve positional encoding and enhance the performance of LLMs during both pre-training and fine-tuning. Theoretical Claims: I checked the proof of the proposition and found no issues with it. Experimental Designs Or Analyses: I have checked the authors' experimental results and the baselines used for comparison. Regarding the need for more comparative methods on long-context tasks, I will elaborate on this specifically in the weaknesses. Supplementary Material: I reviewed More Related Work, Proof of Proposition and Additional Experiments. Relation To Broader Scientific Literature: This paper contributes to the broader scientific literature by introducing a novel context-aware positional encoding method for LLMs, enhancing their position-addressing ability through permutation and orthogonal equivariance. It builds upon prior work on positional encoding techniques like RoPE [1] and ALiBi [2]. [1] Su, Jianlin, et al. "Roformer: Enhanced transformer with rotary position embedding." Neurocomputing, 2024. [2] Press, Ofir, Noah A. Smith, and Mike Lewis. "Train short, test long: Attention with linear biases enables input length extrapolation." 2021. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The authors propose a dynamic, context-aware positional encoding, which can be applied during both the pre-training and fine-tuning stages.
They provide extensive experimental results in the main paper and appendix, demonstrating the effectiveness of the method across various tasks. Additionally, the paper is well-structured, making it easy to understand. Weakness: Existing positional encoding methods have introduced improvements to RoPE to better adapt it for long-sequence tasks, such as NTK-aware RoPE [4]. Could you provide comparative experimental results of this method on long-context tasks? [4] Peng, Bowen, et al. "Yarn: Efficient context window extension of large language models." 2023. Other Comments Or Suggestions: Several typos need to be corrected: 1. “superioroty” should be “superiority” in Line 76. 2. “fine-fining” should be “fine-tuning” in Line 399. Questions For Authors: Computing a positional embedding at each layer increases the computational cost for larger-scale language models. Is the added time acceptable, and does the proposed method have scalability? Please provide a time complexity analysis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly thank Reviewer udZG for appreciating our contributions, providing valuable suggestions on improving the work, and supporting the acceptance of this work. We address the questions as follows. > W1: Existing positional encoding methods have introduced improvements to RoPE to better adapt it for long-sequence tasks, such as NTK-aware RoPE [4]. Could you provide comparative experimental results of this method on long-context tasks? Thank you for your suggestion. We have implemented YaRN as an additional baseline in our SCROLLS benchmark experiments. The training is currently in progress, and we will update the results once they are available. > Q1: Computing a positional embedding at each layer increases the computational cost for larger-scale language models. Is the added time acceptable, and does the proposed method have scalability? Please provide a time complexity analysis. As shown in Table 3, TAPE introduces <1% additional parameters and approximately 12% increased computational cost (FLOPs). Since scaling typically involves stacking these layers, the overhead remains proportional. We believe the modest increase in memory usage and ~10% additional compute time is a reasonable trade-off for the benefits provided.
Summary: This paper introduces TAPE ,a novel approach to enhancing position-based addressing in Transformers by dynamically adapting positional encodings across layers based on sequence context. TAPE ensures stability and robustness by enforcing permutation and orthogonal equivariance. Experimental results demonstrate that TAPE outperforms existing positional encoding techniques in language modeling, arithmetic reasoning, and long-context retrieval tasks. Claims And Evidence: Yes, the claims are. Methods And Evaluation Criteria: Yes, the proposed methods do. Theoretical Claims: Yes, I checked the formulas in Section 3. Experimental Designs Or Analyses: Yes, I checked the experimental results tables and their corresponding analysis. Supplementary Material: Yes, I checked Section B and Section D in the supplementary material. Relation To Broader Scientific Literature: The Positional Encoding proposed in this paper aims to unleash the power of position-based addressing, as existing methods have gradually weakened this capability[1,2]. [1]Roformer: Enhanced transformer with rotary position embedding. Neurocomputing,2024 [2]Sun, Yutao, et al. "A length-extrapolatable transformer." 2022 Essential References Not Discussed: No, there aren’t. Other Strengths And Weaknesses: Strengths: - The proposed method is novel and effective, with corresponding theoretical support. - The extensive experimental setup, especially in terms of performance on long-context tasks provides strong evidence of the effectiveness of the proposed method. Weaknesses: - The authors provide the running time of attention layers as experimental results in Table 4. However, since the proposed method updates positional features layer-wise through interactions and joint training with token representations in every Transformer layer, what is the experimental runtime during training and inference for large-scale language models? 
- Since the proposed method updates positional features layer-wise, how does this affect gradients in the model? Could the authors provide an analysis or empirical results on gradient changes to better understand the impact of this method on the training process? Other Comments Or Suggestions: - Figure 3 could be represented using a different type of chart and would look more visually appealing if it occupied half a column. Questions For Authors: - In Table 2 and Table 4, the authors present experimental results showing that the proposed method performs well on long-context tasks. I am curious whether this strong performance in long-context scenarios has theoretical support. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly thank Reviewer poAq for appreciating our contributions, providing valuable suggestions on improving the work, and supporting the acceptance of this work. We address the questions as follows. >W1: The authors provide the running time of attention layers as experimental results in Table 4. However, since the proposed method updates positional features layer-wise through interactions and joint training with token representations in every Transformer layer, what is the experimental runtime during training and inference for large-scale language models? Thank you for your suggestion. We have included the experimental runtime for full model inference in the updated table below. | Method | TAPE | RoPE | FIRE | T5’s relative bias | | --- | --- | --- | --- | --- | | Samples Per Second | 58.6 | 71.8 | 33.4 | 46.8 | >W2: Since the proposed method updates positional features layer-wise, how does this affect gradient in the model? Could the authors provide an analysis or empirical results on gradient changes to better understand the impact of this method on training process? We appreciate this insightful question. During our experiments, we monitored gradient norms and did not observe significant differences compared to RoPE. However, as this aspect falls outside our primary focus (motivation, methodology, and significance), we did not conduct an in-depth analysis of gradient behavior. >Q1: In Table 2 and Table 4, the authors present experimental results showing that the proposed method performs well on long-context tasks. I am curious whether this strong performance in long-context scenarios has theoretical support. TAPE's superior performance in long-context scenarios can be attributed to two key factors: First, its learnable nature enables dynamic adaptation to varying context lengths. 
Second, as formally established in Proposition 3.1, our relative position encoding scheme possesses inherent generalization capabilities to unseen sequence lengths. In contrast, conventional positional encoding methods exhibit fundamental limitations in long-context settings, as they either rely on predetermined distance-decay patterns, or lack this crucial relativity property.
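The relativity property the rebuttal appeals to can be checked numerically for the rotary scheme itself. The sketch below is a standard GPT-NeoX-style RoPE implementation (an illustrative assumption, not the authors' TAPE code), showing that the query–key inner product depends only on the position offset, which is what enables generalization to unseen absolute positions:

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate coordinate pairs (x[i], x[i + d/2]) by angle pos * base**(-2i/d)."""
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)
    ang = pos * freqs
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)

# Shifting both positions by the same amount leaves the attention score unchanged,
# so the score is a function of the relative offset (here 4) only.
s_near = rope(q, 5) @ rope(k, 9)
s_far = rope(q, 1005) @ rope(k, 1009)
assert np.isclose(s_near, s_far)
```

Each 2-D coordinate pair of `q` and `k` is rotated by an angle proportional to its position, and rotations preserve inner products up to the angle difference, so only `n - m` survives in the score.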
Summary: This paper introduces a new approach to processing language sequences using transformer blocks, where token features and positional embeddings are combined and contextualized. The authors extend traditional positional encoding by dividing it into multiple blocks, allowing for more flexible associations between tokens and their positions. They also ensure that their functions for token mixing and position contextualization are equivariant to permutations and orthogonal transformations, addressing limitations in existing models. The overall goal is to improve how transformer models process both token features and positional information in sequences. ## update after rebuttal I think this is a good paper and keep my positive score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I checked the proof of Proposition 3.1. Experimental Designs Or Analyses: I checked all experiments. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: It provides a new way to handle positions in Transformers, which can be widely applied to diverse domains. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths - Instead of a specialized architecture, this paper modifies the attention and MLP layers of the Transformer, allowing easy integration into existing transformers. - Contextualized positional embeddings allow the model to dynamically adjust how positional information is interpreted depending on the surrounding tokens, improving its ability to capture more nuanced relationships between tokens. - The authors conduct thorough evaluations on various tasks, including passkey retrieval, arithmetic learning, and training from scratch with different context lengths. Additionally, they provide clear and insightful visualizations of attention maps, which further enhance the understanding of their model's behavior. Weaknesses - I do not see any major weaknesses.
One suggestion is about the study of hyper-parameters. I think the authors choose B, L, R to align with RoPE for comparison. Since the proposed embeddings are learnable, it would be interesting to see how those hyper-parameters could affect the performance. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer G28F's positive assessment of our contributions and strong endorsement for acceptance. The reviewer provided one suggestion, to which we respond below: > I do not see any major weaknesses. One suggestion is about the study of hyper-parameters. I think the authors choose B, L, R to align with RoPE for comparison. Since the proposed embeddings are learnable, it would be interesting to see how those hyper-parameters could affect the performance. Thank you for your valuable suggestion. We have conducted hyperparameter exploration (detailed in the Appendix D), and our experiments demonstrate that the RoPE-like initialization yields the best performance among the configurations we tested.
Summary: This paper proposes a new method for learnable positional encodings, where they are allowed to depend on context/content. The positional encodings, termed TAPE (“conTextualized equivariAnt Position Encoding”), can be added to pre-trained transformers, with only the TAPE-relevant parameters fine-tuned. TAPE is permutation and orthogonal equivariant and uses higher-order tensors, and performs well in experiments spanning arithmetic reasoning, long-context retrieval, and language modeling. Claims And Evidence: L051, “This rigidity [the fixed distance dependence / locality bias] limits the ability of positional encodings to model long-range dependencies and makes it challenging to attend to distant query-key pairs.” Is there a citation or experiment to support the claim that it is the positional encodings, in particular, that make long-range dependencies hard for LLMs? L686: “To the best of our knowledge, we are the first to introduce equivariance in language models, recognizing the symmetry in positional embeddings.” This is a strong claim. Is this not already done by RoPE, as noted in the paper? Methods And Evaluation Criteria: Yes, although the highlighted benchmark datasets seem to be chosen as tasks where TAPE is expected to perform well (arithmetic and long context). However, other datasets are included in the appendix. Theoretical Claims: n/a Experimental Designs Or Analyses: The authors explain the success of TAPE in terms of permutation and orthogonal equivariance, as well as the tensorized representations, but only ablate orthogonal equivariance — this is the only area I see for improvement, as the other experiments seem very thorough. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: The paper presents a novel positional encoding, building on RoPE and others. It draws on insights from papers like Ebrahimi et al. 2024 and Sinha et al. 2022.
Essential References Not Discussed: There are other references that deal explicitly with general group equivariance in positional encodings for geometric applications, but I don’t think they are essential. Other Strengths And Weaknesses: Strengths: The proposed positional encodings, TAPE, perform well compared to baselines on arithmetic and long-context tasks. They also admit an efficient implementation, and enable parameter-efficient fine-tuning that works better than LoRA and LongLoRA on passkey retrieval. Several additional experiments that I might have requested were already in the appendix, including ablations of orthogonal equivariance, and evaluation on other LLM tasks where long context is not necessarily the main challenge. Weaknesses: The motivation of the technique is a bit confusing. The authors claim that relative positional encodings are crucial for “stability and generalization to varying sequence lengths” (L223-224), and use relative positional encodings in their formulation (equation 6) and prove that the transformer is invariant to token index shift (Prop 3.1), but then highlight an arithmetic task where absolute positions are necessary. Also, the use of tensors is not very well-motivated. Overall, although the experimental performance is seemingly very good, the design of TAPE seems a bit ad-hoc. The paper also cites geometric learning as inspiration — “This approach is inspired from the studies for geometric deep learning which processes graphs and point clouds by integrating token features with their geometric properties while preserving inherent physical symmetries” (L83) — but isn’t this property (positional encodings depending on relative distances) already satisfied by RoPE? Some additional ablations that could help clarify the design of TAPE include ablating the tensorial nature (reducing dimensions), ablating the dependence on context in the positional embeddings, etc. 
Other Comments Or Suggestions: I would recommend making Proposition 3.1 more general, by explicitly stating the necessary assumptions on E rather than immediately specializing to RoPE and random Fourier features. Also, would the result of Prop 3.1 not hold if f and g satisfied (4) and (5) for only R in the set of permutation matrices, rather than orthogonal matrices? This counterfactual would be good to include as part of the statement, if Prop 3.1 is supposed to motivate orthogonal invariance specifically. Figure 2: can the authors specify what the x and y-axis are in the figure caption? As a minor comment, there were a lot of typos. Here are some of them: Typos: L99, “adiditionaly” L77, “superioroty” L150, “transformer” -> “transformers” L162, “encoding” -> “encodings” L237, “conTexturalized” L246, “O(r)-invariance” —> “O(r)-invariant” L285, “test” —> “tested” L297: “arthimetic” L663, “quardratic” L672: “focus” -> “focuses” L792, “Contextulization” Questions For Authors: 1. The arithmetic task is one where absolute positions are necessary (L316), but TAPE uses relative positions (Prop 3.1) — how then does TAPE do well? (Clarification of TAPE’s advantages and understanding of experimental results) 2. Is there an indexing problem with equation 7? Don’t the two sums cancel each other out? (Clarification for assessing paper’s accuracy) 3. Do the same hyperparameter choices (dimensions, etc) work across different tasks? (Affects my evaluation of advantages/disadvantages of the method) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We greatly thank Reviewer qpJK for appreciating our contributions. We address the concerns as follows. > W1: The motivation of the technique is a bit confusing. The authors claim that relative positional encodings are crucial for “stability and generalization to varying sequence lengths” (L223-224), and use relative positional encodings in their formulation (equation 6) and prove that the transformer is invariant to token index shift (Prop 3.1), but then highlight an arithmetic task where absolute positions are necessary. We apologize for the inaccuracy in our original statement (Line 316) regarding absolute positions that caused confusion. Please refer to our response to Q1 to see whether this clarification addresses the point. --- >W2: Also, the use of tensors is not very well-motivated. Overall, although the experimental performance is seemingly very good, the design of TAPE seems a bit ad-hoc. We acknowledge that certain implementation choices in TAPE, such as the tensorial embedding design, were empirically driven to optimize performance. As noted in Appendix D, our ablation studies validate the effectiveness of these architectural decisions. --- >W3: The paper also cites geometric learning as inspiration — “This approach is inspired from the studies for geometric deep learning which processes graphs and point clouds by integrating token features with their geometric properties while preserving inherent physical symmetries” (L83) — but isn’t this property (positional encodings depending on relative distances) already satisfied by RoPE? Some additional ablations that could help clarify the design of TAPE include ablating the tensorial nature (reducing dimensions), ablating the dependence on context in the positional embeddings, etc. Yes, RoPE also satisfies this property, which is precisely why we employ it as one of our instantiation methods. 
We do not claim this property as our novelty, but rather use it as a principle to motivate our design choices regarding contextualized position embeddings. As shown in Appendix D, we provide ablation studies that: (1) validate the design of equivariant and tensorial embeddings, (2) analyze the effects of Attention and MLP layers, and (3) investigate the impact of hyperparameter choices. Note that ablating the context-dependence in our positional embeddings (our core design) reduces to RoPE, which is consequently included as a baseline in our experiments. --- > Q1: The arithmetic task is one where absolute positions are necessary (L316), but TAPE uses relative positions (Prop 3.1) — how then does TAPE do well? (Clarification of TAPE’s advantages and understanding of experimental results) We appreciate this opportunity to clarify. There was an inaccuracy in the original L316 statement regarding absolute positions - we have now corrected this, as absolute positions are not necessarily required for arithmetic. The critical factor is learning the relative importance of different positions within the sequence. As detailed in L328 onwards, TAPE is able to learn these position-dependent importance relationships within the task context. A detailed explanation is also attached: In arithmetic tasks, every digit has equal importance to the equation, regardless of its distance from the output. Traditional positional embeddings often assume a distance-decay effect, where words farther apart are less significant in the output. While this assumption is valid for most language tasks, it does not hold for arithmetic tasks. Positional contextualization enables dynamic reweighting of positional importance based on the task context, preserving effective distance decay for language tasks while addressing arithmetic contexts appropriately. This highlights TAPE’s potential advantages in arithmetic tasks. --- > Q2: Is there an indexing problem with equation 7? 
Don’t the two sums cancel each other out? (Clarification for assessing paper’s accuracy) No, the two sums in Equation 7 form a linear combination of vectors with weights summing to 1, so they cannot cancel out. --- > Q3: Do the same hyperparameter choices (dimensions, etc) work across different tasks? (Affects my evaluation of advantages/disadvantages of the method) Yes, we maintain consistent hyperparameters across all tasks (with L=R=2 in our main experiments). Additionally, Appendix D provides detailed ablation studies analyzing the impact of different hyperparameter choices. --- *Response to Suggestions:* 1. Prop 3.1: The orthogonality of R is indeed crucial for Proposition 3.1 to hold, as non-orthogonal transformations would violate the invariance properties demonstrated in Appendix B. While generalizing the assumptions is an interesting direction, we believe this extension merits dedicated future research. 2. Figure 2: The x- and y-axes represent the sequence lengths of the two operands respectively. 3. Typos: Thank you for your careful review. We have fixed all of them.
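As background for the relative-position invariance discussed around Prop 3.1 in this thread, the sketch below (our illustration, not the paper's code; the `rope` helper and its channel-pairing convention are assumptions) checks numerically that RoPE attention logits depend only on the relative offset between query and key positions, not on their absolute values:

```python
import numpy as np

def rope(x, pos, theta=10000.0):
    # rotate pairs of channels by position-dependent angles
    half = x.shape[-1] // 2
    freqs = theta ** (-np.arange(half) / half)
    ang = pos * freqs
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:half], x[half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)

# the attention logit depends only on the offset (here 2), so shifting
# both positions by 100 leaves the score unchanged
s1 = float(rope(q, 5) @ rope(k, 3))
s2 = float(rope(q, 105) @ rope(k, 103))
print(np.isclose(s1, s2))  # True
```

This is exactly the token-index-shift invariance that Prop 3.1 generalizes; contextualizing the embeddings, as TAPE does, keeps this symmetry while letting the effective positional weighting depend on content.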
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Accept (poster)
Summary: The paper introduces DocVXQA, a method that generates visual explanations (in the form of a mask) that highlight parts of documents that are relevant for OCR-free document question answering. DocVXQA builds on the Pix2Struct model and learns a mask that, when combined with the input image, must lead the document QA model to give the same answer, while optimizing for mask minimality. A prior is also given by ColPali (which provides a relevance score for text-image matchings) to improve the mask. Experiments are conducted on the DocVQA dataset using a fine-tuned Pix2Struct model. The method is evaluated with accuracy and ANLS (to measure the quality of answers), and pixel ratio (to measure mask minimality). A user-study is also conducted. ## update after rebuttal The rebuttal clarifies several things and answers my questions. I am now more inclined to recommend acceptance of the paper. Claims And Evidence: The explanations provided by the model are useful for users to verify the correctness of an answer by comparing it to the explanations. However, they do not genuinely explain the inner workings of the original model, Pix2Struct. Instead, they may give persuasive narratives that do not necessarily reflect the model's decision-making process, potentially leading to misleading interpretations. In the reviewer’s opinion, the final model thus cannot be considered fully "self-explanatory". The reviewer recommends adjusting the wording throughout the paper to prevent any misinterpretation regarding the model's explanatory capabilities.
This concern is further supported by the design of the user study, which focuses on users' subjective satisfaction with the explanations rather than evaluating whether the explanations genuinely reveal the model's reasoning process, or help users to better understand the model (see “What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods”, Colin et al, NeurIPS 2022). The statement at the end of the introduction ("our method is inherently model-agnostic, allowing seamless integration with various DocVQA architectures") is not adequately supported by the experiments. Providing empirical evidence would make this claim more convincing. The encoder of Pix2Struct is applied to original documents and to masked ones. Since it was pretrained only on original documents, the reviewer suspects that a domain gap occurs when it is applied to the masked ones, hence the low utility scores when only the sufficiency and minimality losses are included. Still, as illustrated in this ablation, the answer is generally visible, so it should be easy to produce the correct one, even without all the context that is integrated back thanks to the ColPali prior. The paper does not explain why the decoder is kept frozen during training, while other parts (e.g., the encoder) are learned. What is the motivation here? Were other choices explored? Methods And Evaluation Criteria: The proposed metrics in Table 2 seem insufficient to fully evaluate the method. For instance, using ColPali to obtain more context-aware explanations will increase the pixel ratio without increasing utility. From this set of metrics alone, it appears that the integration of ColPali is detrimental, even though it is reasonable and well motivated. Theoretical Claims: There are no theoretical claims Experimental Designs Or Analyses: Yes.
Supplementary Material: Briefly read Relation To Broader Scientific Literature: NA Essential References Not Discussed: The related work section is rather well written, except that the organization between Sections 2.2 and 2.3 is unclear, with some overlap that appears arbitrary. A clearer distinction or restructuring would improve the flow of ideas. Other Strengths And Weaknesses: Strengths: * The paper's goal of explaining document VQA models is sound and important for the community. * The proposed model is reasonable, and the loss components are well motivated. * The paper is rather well written and motivated * The user-study was appreciated. Weaknesses: * The proposed model yields results that are very close to the ColPali + Pix2Struct baseline. The empirical improvement is marginal, while the method is much more complex (and has to be trained). Other Comments Or Suggestions: * The mandatory “Impact Statement” section is missing * The itemizations L.294-309 and L.437-400 do not follow the template format, as the bullets are positioned in the margin. Typos: * “are are” L.99 * “aspect the ratios” L.73 * “i. e.,” L138 * “Right” L.877 * L.284, a space is missing before the citation * The use of bold in Table 1 is inconsistent * “Section 4” L.314 (x2) - “Section 4.3” L.377 - “Section 4.3” L.409 → Please revise latex labels / references. * In Table 2, mismatch between “I” and “TI” * “were each shown”, L.428 Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
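The sufficiency-plus-minimality objective this review summarizes (a mask that must preserve the model's answer while covering as little of the document as possible) can be sketched as a toy loss. Everything below is our illustrative assumption, not DocVXQA's actual implementation: the stand-in `model`, the KL-based sufficiency term, and the unweighted sum of the two losses are all hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL divergence between two discrete distributions
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
x = rng.random((4, 4))   # toy stand-in for encoded document patches
m = rng.random((4, 4))   # soft mask in [0, 1), one value per patch

def model(inp):
    # stand-in for the frozen answer head: a distribution over 4 answers
    return softmax(inp.sum(axis=0))

# sufficiency: the masked document should yield the same answer distribution
l_suff = kl(model(x), model(x * m))
# minimality: penalize the total retained area of the mask
l_min = float(m.mean())
loss = l_suff + l_min
print(round(loss, 4))
```

In the actual method, a third context-aware term guided by ColPali relevance scores is added so that the mask keeps supporting context, which is what the review's point about ColPali raising the pixel ratio refers to.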
Rebuttal 1: Rebuttal: The authors sincerely appreciate Reviewer Y25T’s constructive feedback. We are pleased that the reviewer recognized the importance of explainability in Document VQA and the strong motivation behind our loss components. We also appreciate the positive remarks on the clarity and coherence of our writing and the value of our user study. We have carefully addressed all the points raised in the following, and we are grateful for the detailed identification of typos and missing content, which will be corrected in the revised manuscript 1. **"Self-explanatory" clarification**: We clarify that our intention is not to suggest that we reveal the DocVQA model internal workings. Rather, we use "self-explanatory" following established usage in explainable AI literature (Mersha et al., Neurocomputing'24), (Gautam, Boubekki et al., NeurIPS'22), describing models inherently designed to produce interpretable outputs during inference (L158–162, left). This contrasts with post-hoc methods, which retrospectively assess relevance between inputs and outputs on frozen models. We fully accept the reviewer’s recommendation and will clearly specify that self-explanatory refers exclusively to "the model’s capacity to produce interpretable outputs by design," not exposing internal workings. 2. **Model-agnostic claim is not supported**: As we also commented in response to reviewer ovki, our claim of model-agnostic nature relies on the independent, modular design of the explanation mechanism. Although currently integrated with Pix2Struct, its components, such as a learnable mask head, three loss terms of the explainability principles, and a postprocessing pipeline, are architecture independent. To empirically validate this, we are currently integrating our approach with Donut (Kim et al., ECCV'22), and results will be provided in the final manuscript. 3. 
**The encoder was pretrained only on original documents, the reviewer suspects that a domain gap occurs when it is applied on the masked ones. Using S+M, the answer is generally visible, it should be easy to produce the correct one, even without ColPali prior**: The reviewer points out low utility scores in the sufficiency and minimality ablation (S+M) despite visible answers in masked documents. We respectfully disagree. Although answers are sometimes visible, this is not consistently true. The domain gap noted by the reviewer is also reduced by finetuning on both original and masked inputs. We identify aggressive masking as the primary factor lowering utility scores, enforcing minimality at the cost of sufficiency. This lack of context impairs predictions, highlighting the importance of integrating the ColPali prior through token interactions (Table 2). 4. **The paper does not explain why the decoder is kept frozen**: We freeze the decoder to preserve its pretrained text generation capabilities and maintain stable training outputs. Since the decoder is pretrained on original, unmasked documents, jointly finetuning it with the encoder and mask head introduces distributional shifts, causing instability. Instead, we focus learning efforts on the encoder and the new mask head. Note that we did try unfreezing the decoder, but it led to unstable optimization, slower convergence, and reduced overall performance. We will add these details to the final manuscript. 5. **Using ColPali to have more context-aware explanations will increase pixel ratio without increasing utility.**: Table 2 shows that using only the sufficiency and minimality losses (S+M) yields an accuracy of 0.19 and an ANLS of 0.36. Integrating ColPali through token interactions (S+M+TI) significantly improves these metrics to an accuracy of 0.38 and an ANLS of 0.54. Although ColPali increases the pixel ratio, this reflects the preservation of essential contextual cues rather than irrelevant areas.
Practically, this additional context is critical for robust predictions, accuracy, and mask learning guidance. Qualitative human evaluations further confirm that ColPali-enhanced explanations are clearer and more context-aware, supporting a balanced trade-off between minimality and sufficiency. 6. **The proposed model yields results that are very close to the ColPali + Pix2Struct baseline**: Figure 3 illustrates that relying solely on quantitative metrics is insufficient for comprehensive evaluation. Although quantitatively our results appear similar to ColPali+Pix2Struct, qualitatively ours perform better. Following the reviewer’s suggestion, we conducted a human preference study comparing masks from both methods. In total, 12 participants evaluated 21 randomly selected question-answer pairs (252 trials). Our method was preferred in **163 trials (64.7%; 95% CI [58.4%, 70.6%], p << 0.001)**, with all participants (12/12) favoring our approach overall. Thus, our method is close to ColPali+Pix2Struct quantitatively, while offering significantly more compact and interpretable explanations. This new study will be added to the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for the interesting rebuttal which clarifies several things and answers my questions. I am now more inclined to recommend acceptance of the paper. --- Reply to Comment 1.1.1: Comment: We thank reviewer Y52T again for their valuable feedback, which we believe will help improve the quality and clarity of our paper. We are grateful for the time invested in reviewing the manuscript and the rebuttal, and appreciate the updated score and the recommendation that it be accepted.
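The significance claim in the preference study above can be sanity-checked from the reported counts alone. This sketch uses a normal approximation; the rebuttal does not state which interval method produced [58.4%, 70.6%], so the computed bounds may differ slightly from the quoted ones:

```python
from math import sqrt
from statistics import NormalDist

wins, n = 163, 252          # counts reported in the rebuttal
p_hat = wins / n            # ~0.647, the quoted 64.7% preference rate

# 95% CI via the normal approximation to the binomial proportion
z = NormalDist().inv_cdf(0.975)
half = z * sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - half, p_hat + half

# one-sided test of H0: no preference (p = 0.5)
z_stat = (p_hat - 0.5) / sqrt(0.25 / n)
p_value = 1 - NormalDist().cdf(z_stat)

print(round(p_hat, 3), round(lo, 3), round(hi, 3), p_value < 0.001)
```

With these numbers the z statistic is roughly 4.7, so the p-value is far below 0.001, consistent with the rebuttal's "p << 0.001" claim.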
Summary: - This paper presents DocVXQA, a novel self-explainable framework for document question answering that learns to provide context-aware visual explanations. It builds on the Pix2Struct model and incorporates a learnable mask to enhance transparency. The approach is based on the information bottleneck principle and integrates additional pretrained models for better generalization. Experimental results show the effectiveness of the method in terms of both accuracy and interpretability. - The research question is important as it addresses the lack of transparency in current DocVQA models, which is crucial in high-stakes domains. - The key methodological innovation is the use of a learnable mask to generate context-aware explanations, which is formulated based on the information bottleneck principle. - The main findings are that the proposed method outperforms baseline techniques in terms of both accuracy and interpretability, and that the generated explanations are more context-aware and human-understandable. ## update after rebuttal Dear authors, These questions are written by a real human, not generated by an LLM. If you feel they are LLM-generated and choose not to answer them, then I have to respect your action. But I will not change my comments, or my evaluation. Claims And Evidence: (1) The proposed DocVXQA framework is a novel approach to visually self-explainable document question answering. (2) The use of the information bottleneck principle to formulate explainability principles is a theoretically sound approach. (3) The integration of additional pretrained models to enhance the generalizability of the explanations is a practical and effective strategy. (4) The extensive experimental evaluations, including human evaluation, provide strong evidence of the effectiveness of the method. Methods And Evaluation Criteria: Please see Claims And Evidence. Theoretical Claims: Please see Claims And Evidence.
Experimental Designs Or Analyses: Please see Claims And Evidence. Supplementary Material: The supplementary material was read. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: (1) The paper could discuss the limitations of the proposed method and potential future research directions in more detail. How does the proposed method handle complex or ambiguous questions? What are the potential limitations of the proposed method in handling different types of documents? (2) The paper could provide more examples of the generated explanations to better illustrate the effectiveness of the method. (3) The paper could discuss the potential applications of the proposed method in different domains. How does the proposed method handle different types of data and tasks? (4) The paper could provide more details on the scalability of the proposed method. How does the proposed method handle noisy or incomplete data? What are the potential challenges in deploying the proposed method in practice? (5) The paper could provide more details on the relationship between the proposed method and other self-explainable models (e.g., Dib-x). How does the proposed method compare to other state-of-the-art methods in terms of performance and interpretability? Updated: Dear authors, These questions are written by a real human, not generated by an LLM. If you feel they are LLM-generated and choose not to answer them, then I have to respect your action. But I will not change my comments, or my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 2
Summary: This paper proposes DocVXQA, a novel self-explainable framework for Document Visual Question Answering (DocVQA), designed to not only answer questions from document images but also provide context-aware visual explanations via learned relevance heatmaps. The core contribution lies in integrating explainability criteria—namely sufficiency, minimality, and context-awareness—directly into the training process, using an additional mask-generation module guided by external pretrained vision-language retrieval models. Experimental evaluations on the DocVQA dataset demonstrate that DocVXQA effectively balances interpretability and prediction performance, surpassing several baselines in both quantitative metrics and human evaluations. Claims And Evidence: Most claims in the submission are supported by clear and convincing evidence. Specifically, the authors convincingly demonstrate that their proposed DocVXQA framework improves interpretability for document question-answering through both quantitative (accuracy, ANLS metrics, pixel ratio) and qualitative (human evaluation) experiments. However, two key claims lack sufficient evidence: 1. The paper claims that the proposed method is inherently model-agnostic, capable of integrating easily into various DocVQA architectures. However, the authors only demonstrate this with the Pix2Struct architecture, providing no experimental validation on other models. Without such validation, this claim remains theoretical and unsupported by evidence. 2. The authors implicitly present DocVXQA as a general framework suitable for broader document visual question-answering scenarios. However, all experimental evaluations were performed on a single dataset (DocVQA). Without evaluation on multiple datasets or diverse document styles, the evidence provided does not sufficiently support the claim of broad generalizability. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria are generally appropriate for the problem. The mask-generation strategy and the use of sufficiency, minimality, and context-aware loss terms effectively address the interpretability objectives in Document VQA. The evaluation metrics (accuracy, ANLS, pixel ratio, and human evaluation) are suitable for measuring both interpretability and prediction quality. However, the evaluation is limited by relying solely on the DocVQA dataset, raising questions about the method's generalizability to other document types or scenarios. Expanding evaluations to additional diverse datasets would strengthen confidence in the broader applicability of the approach. Theoretical Claims: The paper does not present complex theoretical proofs. The theoretical framing provided—such as the formulation of the explainability objectives based on the Information Bottleneck principle and mutual information—is straightforward, well-established, and reasonably applied. I reviewed these theoretical formulations and found them conceptually sound, without identifying any significant issues. Experimental Designs Or Analyses: The experiments presented are generally valid and sound. The selection of evaluation metrics (accuracy, ANLS, pixel ratio) and comparison baselines (raw attention, attention rollout, Grad-CAM, ColPali-based retrieval) are appropriate for the DocVQA task and clearly aligned with the authors' interpretability objectives. However, several issues are noted: All experiments were conducted on only one dataset (DocVQA). The absence of experiments on additional datasets raises concerns about the generalizability of the results. The authors observed a performance drop (56% → 51%) due to interpretability constraints but did not provide sufficient analyses or discussions exploring the underlying reasons or possible approaches to alleviate this trade-off. 
Addressing this point explicitly would further strengthen the experimental analysis. Additionally, the authors use a fixed post-processing parameter (selecting the top-k=3 regions) for generating the explanation masks. However, the rationale behind choosing k=3 is not clearly explained or experimentally justified. Including sensitivity analyses or experiments to demonstrate how different values of k affect the interpretability and accuracy would strengthen the validity of this choice. Supplementary Material: The supplementary material includes implementation details (such as mask-generation architecture and hyperparameters) and additional qualitative examples. These details effectively support and clarify the methodological decisions and analyses presented in the main paper. Relation To Broader Scientific Literature: The paper relates clearly to existing literature on document VQA and explainable AI, specifically extending OCR-free approaches (e.g., Pix2Struct) and self-explainable methods based on information theory. Unlike post-hoc techniques (e.g., Grad-CAM), this work integrates interpretability directly into model training, contributing a novel, practical approach to improve transparency in document understanding. Essential References Not Discussed: All essential related works appear to be appropriately cited and discussed in the paper. Other Strengths And Weaknesses: The main strengths and weaknesses have been discussed in earlier sections. In summary, the paper's key strengths include its clear originality in integrating interpretability into document VQA and its strong motivation addressing real-world transparency needs. However, there are four main weaknesses that should be addressed: Experiments were conducted on only one dataset (DocVQA), limiting evidence of generalizability to other datasets or document scenarios. 
The proposed method was evaluated using only one model architecture (Pix2Struct), which leaves the "model-agnostic" claim unsupported by empirical evidence. The observed accuracy drop (from 56% to 51%) resulting from enhanced interpretability lacks thorough analysis or discussion. The choice of a fixed parameter (Top-k=3 regions) in post-processing lacks experimental justification or sensitivity analysis to confirm its optimality. Addressing these issues would strengthen the overall contribution and reliability of the findings. Other Comments Or Suggestions: Fix minor typos, such as: Line 073 :"preserve aspect the ratios" (should be "preserve the aspect ratios") Line 099 : "models are are reaching good performances" (duplicate "are") Line 103 : "text is extrtacted with OCR" (should be "extracted") Questions For Authors: 1. The experiments are performed only on the DocVQA dataset, so I cannot confirm the generalizability of your approach. Additional experiments on diverse datasets are needed. 2. The proposed approach was evaluated using only one model (Pix2Struct), so the claim of being "model-agnostic" is not supported. Additional experiments with other architectures would help validate this claim. 3. The performance drop (56%→51%) introduced by interpretability constraints is not sufficiently explained or analyzed. Additional analysis or experiments addressing this issue would strengthen the paper. 4. The choice of a fixed parameter (top-k=3 regions) in mask selection lacks experimental justification. Providing sensitivity analysis for different values of k is recommended. Ethics Expertise Needed: ['Discrimination / Bias / Fairness Concerns'] Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The authors sincerely appreciate Reviewer ovki’s insightful comments. We are pleased that the reviewer recognized the paper’s key strengths, mentioning its originality in integrating interpretability into DocVQA, its effective balance between interpretability and prediction performance, and the "conceptual soundness" of our theoretical formulations. We have carefully considered all suggestions and made a dedicated effort to address each point thoroughly. 1. **I cannot confirm the generalizability of your approach. Additional experiments on diverse datasets are needed**: We have extended our evaluation beyond the standard DocVQA benchmark SPDocVQA (Mathew et al., WACV'21) by incorporating two additional datasets: InfographicVQA (Mathew et al., WACV'22) and PFL-DocVQA (Tito et al., ICDAR'24). These datasets pose greater challenges due to their complex and heterogeneous visual-textual structures, providing a more rigorous test of our model’s generalization capabilities. Some qualitative results on InfographicVQA are available here: https://anonymous.4open.science/r/RebuttalICML2-4D0C/README.md. The new results demonstrate our model’s ability to handle these visually complex documents. As seen in the examples, our approach yields promising results, indicating strong potential for generalization. We will provide the full results on these new datasets in the final manuscript. 2. **The proposed approach was evaluated using only one model (Pix2Struct), so the claim of being "model-agnostic" is not supported**: As we also commented in response to reviewer Y52T, our claim of a model-agnostic nature relies on the independent, modular design of the explanation mechanism. Although currently integrated with Pix2Struct, its components, such as a learnable mask head, three loss terms for the explainability principles, and a postprocessing pipeline, are architecture-independent.
To empirically validate this, we are currently integrating our approach with Donut (Kim et al., ECCV'22), and results will be provided in the final version of the paper. 3. **The performance drop (56%→51%) introduced by interpretability constraints is not sufficiently explained or analyzed.**: It can be attributed to the additional double-loop process employed by our approach. Even when the mask $M$ is set to all ones so that $X \odot M = X$, the model still performs two passes through the encoder: one for mask generation and another for the final prediction. Consequently, during backpropagation, gradients from both passes are accumulated. In theory, if both passes were identical, this would be equivalent to a single pass. However, in practice, non-linearities, normalization layers, and other factors introduce slight variations between the two passes. These differences can cause over-accumulation or interference in the gradients, perturbing the weight updates relative to the original training regime. Such deviations, along with potential numerical instabilities from the extra pass, likely contribute to the modest performance drop observed from 56% to 51%. 4. **The choice of a fixed parameter (top-$k$=3 regions) in mask selection lacks experimental justification. Providing sensitivity analysis for different values of $k$ is recommended**: Following the reviewer’s suggestion, we conducted a sensitivity analysis on the hyperparameter $k$ that determines the number of regions selected during mask postprocessing. The results are summarized in the table below. As the table indicates, increasing $k$ leads to gradual improvements in both accuracy and ANLS, but it also results in a higher pixel ratio, which reflects a less minimal explanation. In terms of balancing utility and minimality, our experiments suggest that choosing $k=3$ or $k=4$ offers a good trade-off.
That is, these values yield utility metrics comparable to higher $k$ settings while keeping the highlighted area compact. We will include the complete sensitivity analysis table in the Appendix of the revised manuscript. | k | ACC | ANLS | Pixel_Ratio | |----|------|------|-------------| | 1 | 0.36 | 0.52 | 0.21 | | 2 | 0.38 | 0.53 | 0.23 | | **3** | **0.38** | **0.54** | **0.24** | | 4 | 0.39 | 0.55 | 0.25 | | 5 | 0.39 | 0.55 | 0.26 | | 8 | 0.40 | 0.56 | 0.28 | | 10 | 0.41 | 0.57 | 0.29 | | 15 | 0.42 | 0.58 | 0.31 | | 20 | 0.43 | 0.58 | 0.32 | | 50 | 0.45 | 0.59 | 0.38 | | 100| 0.45 | 0.60 | 0.40 | --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments. The explanation of the performance drop and the K sensitivity analysis are well addressed. The validation for the model-agnostic claim would benefit from complete quantitative results. Therefore I will maintain my current score of weak acceptance. --- Reply to Comment 1.1.1: Comment: We thank reviewer ovki for the feedback. We present in this response further results on a new dataset that is added during this second phase of rebuttal. Also, we address the remaining concern of ovki about the model-agnostic nature of our approach. 1. **Additional experiments on diverse datasets are needed.**: In the first phase of rebuttal we showed some results with InfographicVQA (Mathew et al., WACV'22). In this phase, we conducted further experiments using a new dataset, PFL-DocVQA (Tito et al., ICDAR'24). This dataset presents increased challenges due to its more complex document layouts and varied visual quality, such as low-resolution images. Despite these difficulties, our method demonstrates consistent performance. Table 1 presents results on the PFL-DocVQA benchmark. Compared to the best performing baseline, our approach achieves significantly higher Accuracy (0.43) and ANLS (0.66), highlighting its improved explainability through more precise textual alignment. 
Moreover, the Pixel Ratio is notably reduced to 0.22, suggesting that our model identifies and relies on a considerably smaller subset of the input, reinforcing its ability to perform minimal evidence reasoning. While the overall utility is lower than that of the unmasked, non-explainable setting, this trade-off aligns with our design goal of enhancing interpretability. These results underscore the effectiveness of our method in producing concise, human-interpretable rationales while maintaining meaningful predictive performance. Some qualitative results using this dataset can be found here: https://anonymous.4open.science/r/ICML_Rebuttal3-68DE/README.md | Method | Mask Threshold | ACC | ANLS | Pixel Ratio | |-------------------------------|----------------|------|------|--------------| | Pix2struct (Unmasked) | -- | 0.80 | 0.92 | 1.00 | | Ours (Unmasked) | -- | 0.57 | 0.79 | 1.00 | | ColPali+Pix2struct | 0.5 | 0.28 | 0.39 | **0.18** | | Ours | 0.7 | **0.43** | **0.66** | 0.22 | **Table 1: Results on PFL-DocVQA** 2. **Additional experiments with other architectures would help validate this claim.**: In response to the remaining concern about the model-agnostic nature of our approach, we are conducting additional experiments using Donut (Kim et al., ECCV'22) as an alternative backbone to Pix2Struct, in order to validate our claim. It is important to highlight that Donut and Pix2Struct differ significantly in how they process input. Pix2Struct renders the question directly onto the image, whereas Donut tokenizes the question and uses it to condition the decoder during generation. As a result, we made the necessary method-specific adjustments, modifying our mask head to also take the encoded question as input. Given the limited time during the rebuttal phase, the experiments are still in progress, but we have already obtained preliminary results that support the model-agnostic capability of our method.
These can be found in this link: https://anonymous.4open.science/r/RebuttalICML4-065C/README.md We are continuing to train and evaluate the model, and we will include the full results in the final manuscript.
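The two-pass masking mechanism discussed in item 3 of this thread (a mask head proposes a soft mask $M$, and the final prediction is made on $X \odot M$, with the encoder run once per pass) can be illustrated with a minimal sketch. All names here are ours; the toy encoder stands in for the actual Pix2Struct backbone, so this is only a conceptual illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Toy stand-in for the shared encoder (the paper uses Pix2Struct);
    # the tanh nonlinearity is what makes the two passes non-equivalent.
    return np.tanh(x @ W)

def mask_head(h):
    # Hypothetical mask-proposal head: squashes features to scores in (0, 1).
    return 1.0 / (1.0 + np.exp(-h.mean(axis=1, keepdims=True)))

X = rng.normal(size=(4, 8))        # 4 image patches, 8 features each
W = 0.1 * rng.normal(size=(8, 8))  # shared encoder weights

# Pass 1: encode the full input and propose a soft mask M.
M = mask_head(encoder(X, W))
# Pass 2: re-encode the masked input X * M for the final prediction.
H_masked = encoder(X * M, W)

# Even with M fixed to all ones (so X * M == X), the encoder runs twice,
# and during backprop gradients from both passes would accumulate.
assert np.allclose(X * np.ones_like(M), X)
```

Because the same encoder weights `W` appear in both passes, any gradient through the mask-generation pass adds to the gradient through the prediction pass, which is the accumulation effect the rebuttal cites as a likely cause of the accuracy drop.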
Summary: This paper makes three contributions. First, it introduces DocVXQA, a novel self-explainable framework that not only answers questions about documents but also provides visual explanations highlighting relevant regions that justify the answers. Second, it quantitatively formulates explainability principles (sufficiency and minimality) as explicit learning criteria using an information bottleneck approach, creating a balance between contextual relevance and representation efficiency. Third, it proposes an end-to-end training strategy that maintains compatibility with existing DocVQA architectures while adding minimal architectural changes. ## Final recommendation and Justification (post rebuttal) The authors have provided substantial details in response to my comments, particularly by incorporating new datasets to address concerns around generalization. I am changing my rating to weak accept and expect that the authors include these additional details and results in the final version. Claims And Evidence: The submission makes several key claims: 1. DocVXQA provides context-aware visual explanations for document question answering, going beyond simply highlighting answer locations. 1. The framework delivers explanations that are both contextually sufficient yet representation-efficient, creating a balance between comprehensive context and minimizing irrelevant information. 1. The method outperforms baseline explanation techniques in both utility (accuracy/ANLS) and explainability (clarity and conciseness). 1. Human evaluators prefer DocVXQA's explanations for their context-awareness and clarity. Methods And Evaluation Criteria: The proposed methods make sense for explaining document question answering. The approach balances showing enough information to understand the answer while keeping explanations simple. The testing approach has some weaknesses - they should have tested with more people and different types of documents.
Also, the big drop in accuracy when using explanations suggests this might not work well in real-world settings where getting the right answer is critical. Theoretical Claims: The paper makes no theoretical claims. Experimental Designs Or Analyses: 1. The paper fails to justify why they use different thresholds for their method compared to previous approaches. This inconsistency raises questions about fair comparison methodology. 1. The paper lacks a proper ablation study showing how ColPali and Pix2Struct perform individually, making it difficult to assess the actual contribution of their proposed integration. 1. There is a confusing presentation in the results tables where higher Accuracy is stated as better, yet the baseline numbers in the top row appear to outperform many of the tested approaches. 1. The authors inexplicably use different evaluation metrics than those presented in the original ColPali and Pix2Struct papers, hindering direct comparison with established benchmarks and raising concerns about cherry-picking favorable metrics. 1. The results section is particularly difficult to follow since none of the tables are properly referenced in the text, leaving readers to guess the relationships between claims and supporting evidence. 1. The paper fails to demonstrate generalizability by not evaluating performance on standard tasks used in previous works, such as Screenshot Parsing Pretraining, AI2D, and Screen2Words, which were core evaluation benchmarks in related literature. Supplementary Material: I reviewed the supplementary material, which provides valuable additional insights on several aspects of the work. The appendix details the implementation specifics, including a thorough explanation of the postprocessing methodology that enhances the clarity and interpretability of the visual explanations.
It also contains helpful visualizations of the mask generation and refinement process, clearly illustrating how the system progressively improves its explanations. The hyperparameter settings are well-documented, enabling reproducibility of the results. Particularly useful were the additional qualitative results demonstrating the effects of token interactions, which help clarify how the model avoids overfitting to answer regions and generates more context-aware explanations. Relation To Broader Scientific Literature: DocVXQA builds upon several key research areas. It extends beyond simple attention visualization techniques (Chefer et al., 2021, https://arxiv.org/abs/2103.15679) by developing contextually sufficient explanations rather than just highlighting answer locations. The token interaction approach connects to knowledge distillation in vision-language models (Faysse et al., 2024, https://arxiv.org/abs/2407.01449). While innovative, the work misses opportunities to relate their challenges with mutual information estimation to similar challenges in explainable AI research and would benefit from more rigorous human evaluation protocols (Doshi-Velez & Kim, 2017, https://arxiv.org/abs/1702.08608). Essential References Not Discussed: I am not aware of any missing references. Other Strengths And Weaknesses: ## Strengths: The paper's strengths lie in its novel approach to a significant problem in document AI - the lack of transparency in document visual question answering. The paper effectively identifies and addresses the limitations of conventional attention-based explanations that merely highlight answer regions without proper context. The quantitative formulation of explainability principles (sufficiency and minimality) provides a sound theoretical foundation. ## Weaknesses: First, the performance degradation from unmasked (56% accuracy) to masked models (38% accuracy) represents a substantial trade-off that undermines real-world applicability. 
Second, the heavy reliance on post-processing (background removal, bounding boxes) obscures how much improvement comes from the learning approach versus these enhancement techniques. Fourth, the human evaluation lacks statistical rigor with 26 participants evaluating only 10 examples each. Other Comments Or Suggestions: Typo on Line 99: redundant "are". Questions For Authors: 1. How does the method perform on documents with complex layouts or multiple tones of writing? 1. What are the computational costs compared to baseline approaches? 1. Can the approach be extended to other vision-language understanding tasks? 1. How robust is the method to variations in document image quality? 1. What are the limitations in handling highly complex or densely populated documents? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The authors thank Reviewer 9NT1 for their feedback. We appreciate the recognition of our approach’s novelty in addressing the challenge of transparency in DocVQA systems, and the acknowledgment that our approach provides a strong theoretical foundation through the quantitative formulation of explainability principles. Below, we address all of 9NT1’s concerns. 1. **What is the performance on documents with complex layouts or multiple tones of writing? What are the limitations in handling highly complex or densely populated docs?**: In the first case, we found that our model still performs well and produces good explanations. In densely populated documents, our model can provide fragmented explanations, which are hard to interpret. See qualitative results here: https://shorturl.at/nstj0. We will discuss these results in the final manuscript. 2. **What are the computational costs compared to baseline approaches?**: Our method requires additional training compared to post-hoc approaches. However, at inference time, the additional computational cost is only the forward pass through the small mask proposal head (591,361 parameters). All baseline comparisons are performed with respect to the Pix2Struct base model with 282M parameters, aside from ColPali, which itself uses 2.9B parameters. 3. **Can the method be extended to other vision-language understanding tasks? Why use different metrics than ColPali and Pix2Struct?**: The masking-based method can be readily generalized to other vision-language tasks. Nevertheless, the primary goal of our work is to provide explanations tailored for the DocVQA task, rather than aiming at making Pix2Struct itself explainable. For the metrics used, we clarify that our specific focus is on DocVQA, thus retrieval metrics such as NDCG (used in ColPali) are not applicable. We adopt standard DocVQA metrics that are also used by Pix2Struct for QA tasks. 4.
**What is the relation of your work with mutual information estimation?**: We appreciate the reviewer’s reference to Doshi-Velez and Kim (2017). Our core motivation closely corresponds with both the data-driven methodology for uncovering interpretability factors (Section 4.1) and the functionally-grounded evaluation outlined in Section 3.3. We concur with the suggestion of adopting an application-grounded evaluation framework; we plan to apply our methodology to domain-specific DocVQA tasks in future research. 5. **How robust is the method to variations in document image quality, and why is there a degradation of performance on masked images?**: SPDocVQA contains some images of poor quality, and our model is robust to these variations; some qualitative results are here: https://shorturl.at/mqbzg. Regarding the performance degradation from masked to unmasked models: the trade-off between accuracy and explainability is a well-documented phenomenon (Crook et al., REW'23, Bell et al., ICPS'22), and several approaches have been proposed to balance this trade-off (Luo et al., BJR Open'19, Mori et al., ESE'23). As illustrated in Fig. 4: lowering the threshold decreases explanation quality but improves utility. This shows that the observed performance gap is an inherent constraint in explainability rather than a fundamental limitation of our approach. We also want to clarify that the 38% is the accuracy on masked images; for clear images, the accuracy is 51%. 6. **Why use different thresholds, and why is there a heavy reliance on post-processing?**: We adjust the thresholds so that each technique operates at its optimal setting, to ensure a fair comparison between competing methods. The post-processing steps are primarily intended to enhance human interpretability of the masks, and do not significantly affect the core performance of our model.
As shown in Figure 4, the observed gains are primarily attributable to the underlying learning approach rather than to the post-processing. 7. **The paper lacks a proper ablation study showing how ColPali and Pix2Struct perform individually**: We would like to clarify that it is impossible to evaluate each technique's individual impact, as they are designed to be used only in combination for XDocVQA. 8. **The baseline numbers in the top row appear to outperform many of the tested approaches.** We would like to clarify that these are not the baselines but rather upper bounds (unmasked, no explanation). We will make this distinction clear in the paper. 9. **Human evaluation lacks statistical rigor with 26 participants evaluating only 10 examples**: It is challenging to include more documents because we have to redo the study, but we have increased the number of participants to 42, and found that the results remained consistent as seen in this table: | Method | Context ↑ | Clarity ↑ | |-----|------|------| | Raw Attn (0.25)| 2.90 ± 0.48| 2.75 ± 0.42| | Raw Attn (0.50)| 2.26 ± 0.54| 2.34 ± 0.60| | ColPali+Pix2Struct| 3.97 ± 0.25| 3.02 ± 0.42| | Ours (0.7)| **4.49 ± 0.26**|**3.56 ± 0.41**| --- Rebuttal Comment 1.1: Comment: The rebuttal does not adequately address several key concerns—most notably point 3 on generalization and robustness to other vision-language tasks. Critical experimental gaps need to be addressed in the paper (points 1, 2, 5, 9). Additionally, the writing still needs work for clarity and flow (points 4, 6, 7, 8), affecting the readability of the core contributions. While a major revision is likely needed for acceptance, the paper could still make progress by running experiments on other vision-language tasks, and adding essential qualitative and ablation results to better support their claims. The paper could be improved by adding datasets like InfographicVQA and PFL-DocVQA (ovki-1). 
Additionally, the model-agnostic claim remains unsubstantiated (ovki-2), since all evaluations are still centered on a single backbone (Pix2Struct). Adding results from other backbones would strengthen the paper. --- Reply to Comment 1.1.1: Comment: We thank reviewer 9NT1 for the continued engagement and feedback. Below, we provide detailed responses to the remaining concerns. 1. **The writing still needs work for clarity and flow**: In the final version, we will thoroughly revise the manuscript to address this concern. This will include restructuring paragraphs to ensure a more logical progression of ideas, refining the language and transitions to enhance readability, and incorporating new experimental results and insights that emerged during the rebuttal process. 2. **The paper could still make progress by running experiments on other vision-language tasks**: We deliberately restrict our study to DocVQA, a downstream task within the broader domain of Document Intelligence (DI), which also includes document layout analysis, OCR, summarization, etc. Our focus on DocVQA is driven by its natural compatibility with saliency map-based explanation methods. In this task, highlighting semantically relevant regions directly facilitates the question-answering process and enhances interpretability in a manner that is both intuitive and practically effective. Although the proposed approach could, in principle, be extended to other DI or vision-language tasks, each of these introduces distinct challenges that demand task-specific adjustments, especially in the token interactions loss design. Designing a unified explainability framework that accommodates the diverse contextual and semantic characteristics of multiple downstream tasks represents a significant and separate research effort. Accordingly, we intentionally confine our scope to DocVQA to provide a more focused and rigorous contribution. 
Future work could build upon our findings to generalize across the broader vision-language landscape. A further reason for our choice lies in DocVQA's practical significance. It is already being deployed in high-stakes domains such as banking, healthcare, and public administration, where automated systems are used to extract information from documents with the goal of minimizing human intervention. In such critical settings, ensuring the trustworthiness and interpretability of model outputs is of immediate and pressing importance. 3. **The paper could be improved by adding datasets like InfographicVQA and PFL-DocVQA**: Following this suggestion, we performed additional experiments with new datasets. First, PFL-DocVQA (Tito et al., ICDAR'24), where despite the more complex layouts and varying document visual quality (low resolution), our method performs consistently. Table 1 presents results on this benchmark. Compared to the baseline, our method achieves a significantly higher Accuracy (0.43) and ANLS score (0.66). In addition, the Pixel Ratio is drastically reduced to 0.22, indicating that our model identifies and relies on a substantially smaller subset of the input for its predictions. Although the overall utility is lower than the non-explainable unmasked performance, this tradeoff is consistent with our design goal of prioritizing interpretability and minimal evidence reasoning. Some qualitative results using this dataset can be found here: https://anonymous.4open.science/r/ICML_Rebuttal3-68DE/README.md. Second, we experimented with InfographicVQA (Mathew et al., WACV'22); this dataset is also challenging, as it contains infographic data with very complex layouts, charts, and tables. However, we found that our model still performs well, as shown here: https://anonymous.4open.science/r/RebuttalICML2-4D0C/README.md. The full results will be added to the final manuscript.
| Method | Mask Threshold | ACC | ANLS | Pixel Ratio | |-----|---|---|------|----| | Pix2struct (Unmasked) | -- | 0.80 | 0.92 | 1.00 | | Ours (Unmasked) | -- | 0.57 | 0.79 | 1.00 | | ColPali+Pix2struct | 0.5 | 0.28 | 0.39 | **0.18** | | Ours | 0.7 | **0.43** | **0.66** | 0.22 | **Table 1: Results on PFL-DocVQA** 4. **Adding results from other backbones would strengthen the paper**: In response to this concern, and in line with the suggestion from Reviewer ovki-2, we conducted additional experiments using Donut as an alternative backbone to Pix2Struct in order to evaluate the model-agnostic nature of our approach. It is important to highlight that Donut and Pix2Struct differ significantly in how they process input. Pix2Struct renders the question directly onto the image, whereas Donut tokenizes the question and uses it to condition the decoder during generation. As a result, we made the necessary method-specific adjustments, modifying our mask head to also take the encoded question as input. Given the limited time during the rebuttal phase, the experiments are still in progress, but we have already obtained preliminary results that support the model-agnostic capability of our method. These can be found in this link: https://anonymous.4open.science/r/RebuttalICML4-065C/README.md We are continuing to train and evaluate the model, and we will include the full results in the final manuscript.
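The mask-threshold and Pixel Ratio trade-off that recurs in these threads (stricter thresholds keep fewer pixels, giving a more minimal but potentially less useful explanation, cf. the Fig. 4 discussion) can be sketched with a toy example. The function name and the tiny mask are ours, purely for illustration:

```python
import numpy as np

def binarize(mask, threshold):
    """Hypothetical post-processing step: binarize a soft saliency mask at
    `threshold` and report the fraction of pixels kept (the Pixel Ratio)."""
    keep = mask >= threshold
    return keep, keep.mean()

soft_mask = np.array([[0.1, 0.8],
                      [0.6, 0.9]])

# A stricter threshold keeps fewer pixels: a more minimal explanation,
# typically at some cost in answer utility.
keep_strict, ratio_strict = binarize(soft_mask, 0.7)
keep_loose, ratio_loose = binarize(soft_mask, 0.5)
assert ratio_strict == 0.5 and ratio_loose == 0.75
```

This is the same knob the rebuttal tunes per method (e.g. threshold 0.7 for the proposed approach versus 0.5 for ColPali+Pix2Struct) so that each technique operates at its own optimal setting.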
Nonparametric Teaching for Graph Property Learners
Accept (spotlight poster)
Summary: The paper introduces GraNT, a novel paradigm that applies nonparametric teaching principles to accelerate the training of graph property learners (specifically GCN). By establishing a theoretical link between traditional parameter-based gradient descent and functional gradient descent, the authors design a greedy algorithm that selects a subset of graphs (those with the largest prediction discrepancies) to expedite convergence. Extensive experiments on both graph-level and node-level tasks across several benchmark datasets demonstrate that GraNT significantly reduces training time while preserving generalization performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. The paper effectively bridges nonparametric teaching with GCN training, offering a fresh perspective by interpreting the evolution of GCNs through the lens of functional gradient descent. Theoretical Claims: The overall proof seems correct, though I didn't go through the details. The issue is that the proofs are primarily based on (two-layer) GCNs, a special case of a relatively old baseline. Extending the theoretical framework and empirical validation to more recent architectures (e.g., Graph Attention Networks) would strengthen the claim of general applicability and broaden the impact of the work. Experimental Designs Or Analyses: The current evaluation compares only different variants of GraNT. Including comparisons with established efficient GNN models and state-of-the-art GNNs would better contextualize the contributions and highlight the advantages of the proposed method. Supplementary Material: I didn't see any supplementary material available. Relation To Broader Scientific Literature: NA Essential References Not Discussed: One major issue of this paper is that while the authors claim to address efficiency, none of the efficient GNN papers are discussed or compared against.
Other Strengths And Weaknesses: Although the paper claims that training GCNs is costly, it would be more compelling if it detailed how current efficient graph libraries (e.g., DGL, PyG) fall short in addressing these challenges. A more comprehensive discussion of experimental settings and implementation specifics would clarify how GraNT overcomes these limitations. Besides, the code and data are not available. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for many constructive comments. We are deeply appreciative of the reviewer’s efforts to help us improve our paper. We take all comments seriously and try our best to address every raised concern. We sincerely hope that our response can resolve your concerns. Any follow-up questions are welcome. **[Theoretical Claims]** Sorry for the confusion. We would like to clarify that the theoretical analysis does not rely on the GCN being limited to two layers; it can be of any depth. Additionally, our empirical validation includes experiments with different depths and widths, as shown in Table 4. We agree with the reviewer that expanding the theoretical framework and empirical validation to more recent architectures would enhance the broader applicability of the work. We also believe that the concept and analysis behind GraNT have significant potential for wider extensions. Analyzing GCNs is an important foundational step in applying nonparametric teaching for graph property learning, which paves the way for further advancements. However, this is beyond the scope of the current paper and will be explored in future research. In response to your inquiry, we have applied the GraNT algorithm to GIN [1] on the ogbg-molhiv dataset and will provide further discussion on this in the revised version. | | GIN [1] w/o GraNT | GIN [1] w/ GraNT (B) | GIN [1] w/ GraNT (S) | |---|--------------|-----------|--------------| | Training Time | 1446.91 | 982.66 (-32.09%) | 965.37 (-33.28%) | | ROC-AUC | 0.7694 | 0.7714 | 0.7705 | **[Experimental Designs Or Analyses]** Thank you for your constructive feedback. We present comparisons of GraNT with recent works [2-6] below. Our results show that, with theoretical justification, GraNT outperforms these methods in terms of both training time and specific metrics. We will provide further discussion on this in the revised version to better emphasize the context and contributions of our work.
**ogbg-molhiv** | | GMoE-GCN [2] | GMoE-GIN [2] | GDeR-GCN$^\alpha$[3] | GDeR-PNA$^\alpha $ [3] | GCN [4] | GCN+virtual node[4] | GraNT (B) | GraNT (S) | |---|------|----|----|----|----------|------|-------|------------| | Training Time | 3970.16 | 3932.06 | 1772.23 | 5088.88 | 2888.80 | 3083.16 | 1457.39 | 1597.69 | | ROC-AUC | 0.7536 | 0.7468 | 0.7261 | 0.7616 | 0.7385 | 0.7608 | 0.7676 | 0.7705 | $^\alpha$: batch_size=500, retain_ratio=0.7. **QM9** | | MOAT$^\gamma$ [5] | AL-3DGraph$^\delta$ [6] | AL-3DGraph$^\zeta$ [6] | AL-3DGraph$^\tau$ [6] | GraNT (B) | GraNT (S) | |---|------|--------|-----|------|-------|-------| | Training Time | 69000 | 9200.27 | 9364.74 | 12601.77 | 6392.26 | 7076.37 | | MAE | 0.0236 | 0.7991 | 0.4719 | 0.1682 | 0.0051 | 0.0051 | $^\gamma$: The training time is calculated as time_per_epoch * epochs = 92 * 750. The MAE value reported is from the paper. $^\delta$: lr=5e-5, batch_size = 256, which matches our settings. $^\zeta$: lr=5e-4, batch_size = 256. $^\tau$: lr=5e-4, batch_size = 32, which corresponds to the default settings used in the provided code for that paper. [1] How powerful are graph neural networks?. – ICLR’19. [2] Graph mixture of experts: Learning on large-scale graphs with explicit diversity modeling. – NeurIPS’23. [3] GDeR: Safeguarding efficiency, balancing, and robustness via prototypical graph pruning. – NeurIPS’24. [4] Semi-supervised classification with graph convolutional networks. – ICLR’17. [5] Graph Prompting for 3D Molecular Graphs. - CIKM’24. [6] Empowering Active Learning for 3D Molecular Graphs with Geometric Graph Isomorphism. – NeurIPS’24. **[Essential References Not Discussed]** Sorry for the confusion. 
We would like to clarify that the discussion on learning efficiency (Chen et al., 2018; Liu et al., 2022b; Zhang et al., 2023c) is included in the Related Works section, covering topics like normalization (Cai et al., 2021), graph decomposition (Xue et al., 2023), and lazy updates (Narayanan et al., 2022). We will enhance the presentation and include more detailed discussion in the revised version. **[W1]** Thank you for your suggestions. We would like to clarify that this paper is algorithmic in nature, introducing GraNT, a novel paradigm that improves the learning efficiency of graph property learners (GCNs) through nonparametric teaching. Our implementation leverages the PyG graph library, which is designed to optimize compilation efficiency, a concern that is independent of the main focus of our work. **[W2]** We have included the pseudocode in Algorithm 1, which outlines the core idea of our implementation, and our experiments are based on this. Detailed experimental settings and implementation specifics can be found in Appendix C. The code and data will be made available. --- Rebuttal Comment 1.1: Comment: The authors did a great job in addressing my concerns. I sincerely appreciate their effort for adding new experiments. Since my concerns are addressed. I've raised my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your constructive feedback. We're pleased that our response has addressed your concerns. Should you have any follow-up questions, please feel free to reach out. We will certainly incorporate any new and useful results in the revision. We also appreciate your updated score.
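The teaching-style selection step described in the reviews above (greedily keeping the graphs with the largest prediction discrepancies to speed up convergence) can be sketched in a few lines. This is an illustrative reconstruction under our own naming, not the authors' Algorithm 1:

```python
import numpy as np

def select_hardest(losses, keep_ratio):
    """Illustrative greedy selection in the spirit of GraNT (names and
    details are ours): keep the fraction `keep_ratio` of examples with the
    largest per-example loss, i.e., the largest prediction discrepancy."""
    k = max(1, int(len(losses) * keep_ratio))
    return np.argsort(losses)[::-1][:k]

# Per-graph training losses for a toy batch of four graphs.
per_graph_loss = np.array([0.05, 0.9, 0.3, 0.7])
selected = select_hardest(per_graph_loss, keep_ratio=0.5)
assert set(selected.tolist()) == {1, 3}  # the two hardest graphs survive
```

Training then proceeds on only the selected subset each round, which is how such schemes cut wall-clock time while aiming to preserve generalization.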
Summary: This paper presents GraNT (Graph Nonparametric Teaching), a novel framework that improves the learning efficiency of graph property learners (GCNs) using nonparametric teaching. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Not sure if this proposed approach can be applied to more GNNs beyond GCN. Essential References Not Discussed: No Other Strengths And Weaknesses: 1. Long training times due to slow convergence are common in many machine learning tasks, particularly in AI4Science applications, as explored in this paper. Therefore, I believe the proposed approach's improvement in learning efficiency is highly valuable. 2. Section 4 is well-structured, progressing logically from theoretical analysis to algorithm design, making the presentation clear and effective. 3. Graph Convolutional Networks (GCNs) are a fundamental GNN architecture, but more advanced GNNs have been developed with improved expressivity, scalability, and the ability to capture complex graph structures. Examples include MPNN, GAT, GIN, Graph Transformer Networks, as well as high-order and equivariant GNNs. These architectures are now more commonly used for complex regression and classification tasks at both graph and node levels, often requiring even longer training times. I’m curious whether the proposed approach can also be applied to these more advanced GNNs beyond GCN. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the encouraging comments and constructive suggestions. We sincerely thank the reviewer for their efforts in helping us improve the paper. We hope that our response resolves your concerns. **[W3]** Very thoughtful question! We believe the idea and analysis behind GraNT have significant potential for broader extensions. As a fundamental neural network architecture for graph property learning, analyzing GCNs is an essential step in applying nonparametric teaching within this context before exploring further extensions. As mentioned in the Concluding Remarks and Future Work section, exploring other GNNs beyond GCN could be an intriguing direction. This could involve examining the core mechanisms of specific GNNs through the GNTK theory. Some relevant works on GNTK that might be insightful are [1,2]. However, this is outside the scope of the current paper and will be addressed in future work. In response to your curiosity, we have applied the GraNT algorithm to GIN [3] on the ogbg-molhiv dataset and will include additional discussion on this in the revised version. | | GIN [3] w/o GraNT | GIN [3] w/ GraNT (B) | GIN [3] w/ GraNT (S) | |---|--------------|-----------|--------------| | Training Time | 1446.91 | 982.66 (-32.09%) | 965.37 (-33.28%) | | ROC-AUC | 0.7694 | 0.7714 | 0.7705 | [1] Graph neural tangent kernel: Convergence on large graphs. – ICML’23. [2] Graph neural tangent kernel: Fusing graph neural networks with graph kernels. – NeurIPS’19. [3] How powerful are graph neural networks? – ICLR’19.
Summary: In this paper, the authors innovatively introduce a training paradigm termed Graph Nonparametric Teaching (GraNT) designed for graph property learners. Their main idea is to reinterpret the training of GCNs through the lens of nonparametric teaching, which selects training examples (graphs) strategically to accelerate learning. The authors demonstrate that parameter-based gradient descent updates in GCNs align with functional gradient descent in nonparametric teaching. Experiments show that the proposed approach can reduce training time to some extent across various graph-level and node-level tasks (regression and classification) without sacrificing much generalization performance. Claims And Evidence: The main claims are supported by both analytical derivations and extensive experimental validation across multiple datasets (e.g., QM9, ZINC, ogbg-molhiv, ogbg-molpcba). Methods And Evaluation Criteria: The method is logically sound. The evaluation criteria make sense to me. Theoretical Claims: I didn't check completely the proof in the appendix. But it appears to me that the proof is well-presented. Experimental Designs Or Analyses: The experiment setup makes sense, but is not sufficient. I indeed have some concerns regarding the experiment as what will be elaborated below. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: The paper is well-positioned within the literature of graph neural networks. Essential References Not Discussed: NA. Other Strengths And Weaknesses: **Strengths:** - The theoretical integration of nonparametric teaching into graph property learning is original. - Experiments are performed on some large-scale graph benchmarks. - The presentation is clear and well-written. - Detailed derivations and additional results are provided in the appendix. **Weaknesses:** I do have some major concerns regarding the experiments of this paper. 
First of all, the comparative experiments are far from sufficient. The authors are suggested to compare GraNT with more GNNs. Also, it is known that the results on some datasets, such as ZINC, are sometimes unstable (see the benchmark paper [a]). It is suggested that the std be reported for the related results. This is critical because it can demonstrate whether the performance of GraNT is consistently better, especially given the method's reliance on gradient magnitude for teaching graph selection, which might be sensitive to label distribution imbalance and impact stability. [a] Benchmarking Graph Neural Networks. Furthermore, there is limited exploration of the algorithm's sensitivity to hyperparameters, such as the number of selected graphs in GraNT. Other Comments Or Suggestions: Minor typo: - Page 6, Line 295: “GNKT” should be corrected to “GNTK”. Questions For Authors: Please address my aforementioned major concerns on paper experiments, such as insufficient comparative experiments and lack of stability analysis. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the useful comments. We are deeply appreciative of the reviewer’s efforts to improve our paper. We take all comments seriously and try our best to address every raised concern. We sincerely hope that our response resolves your concerns. **[W1]** Thank you for your helpful feedback in improving our paper. Below, we present a comparison of GraNT with the recent works GMoE [1] and GDeR [2]. Our results demonstrate that GraNT outperforms these approaches in both training time and specific metrics. We will include a more detailed discussion of this comparison in the revision. **ogbg-molhiv** | | GMoE-GCN [1] | GMoE-GIN [1] | GDeR-GCN$^\alpha$[2] | GDeR-PNA$^\alpha$ [2] | GCN [3] | GCN+virtual node[3] | GraNT (B) | GraNT (S) | |---|------|----|----|----|----------|------|-------|------------| | Training Time | 3970.16 | 3932.06 | 1772.23 | 5088.88 | 2888.80 | 3083.16 | 1457.39 | 1597.69 | | ROC-AUC | 0.7536 | 0.7468 | 0.7261 | 0.7616 | 0.7385 | 0.7608 | 0.7676 | 0.7705 | $^\alpha$: batch_size=500, retain_ratio=0.7. **QM9** | | MOAT$^\gamma$ [4] | AL-3DGraph$^\delta$ [5] | AL-3DGraph$^\zeta$ [5] | AL-3DGraph$^\tau$ [5] | GraNT (B) | GraNT (S) | |---|------|--------|-----|------|-------|-------| | Training Time | 69000 | 9200.27 | 9364.74 | 12601.77 | 6392.26 | 7076.37 | | MAE | 0.0236 | 0.7991 | 0.4719 | 0.1682 | 0.0051 | 0.0051 | $^\gamma$: The training time is calculated as time_per_epoch * epochs = 92 * 750. The MAE value reported is from the paper. $^\delta$: lr=5e-5, batch_size = 256, which matches our settings. $^\zeta$: lr=5e-4, batch_size = 256. $^\tau$: lr=5e-4, batch_size = 32, which corresponds to the default settings used in the provided code for that paper. [1] Graph mixture of experts: Learning on large-scale graphs with explicit diversity modeling. – NeurIPS’23. [2] GDeR: Safeguarding efficiency, balancing, and robustness via prototypical graph pruning. – NeurIPS’24. 
[3] Semi-supervised classification with graph convolutional networks. – ICLR’17. [4] Graph Prompting for 3D Molecular Graphs. - CIKM’24. [5] Empowering Active Learning for 3D Molecular Graphs with Geometric Graph Isomorphism. – NeurIPS’24. **[W2]** Thanks for your help in improving our paper. We have included the results with standard deviations for all validations. This demonstrates that the proposed GraNT not only achieves training time efficiency with theoretical justification but also exhibits stability. We will fix it. | GraNT | Dataset | MAE ↓ | ROC-AUC ↑ | AP ↑ | |-------|-----------------|------------|-----------------|------------| | ✗ | QM9 | 0.0051$\pm$0.0009 | - | - | | ✗ | ZINC | 0.0048$\pm$0.0004 | - | - | | ✗ | ogbg-molhiv | - | 0.7572$\pm$0.0005 | - | | ✗ | ogbg-molpcba | - | - | 0.2579$\pm$0.0058 | | ✗ | gen-reg | 0.0007$\pm$0.0001 | - | - | | ✗ | gen-cls | - | 0.9150$\pm$0.0024 | - | | ✓ (B) | QM9 | 0.0051$\pm$0.0009 | - | - | | ✓ (B) | ZINC | 0.0048$\pm$0.0004 | - | - | | ✓ (B) | ogbg-molhiv | -| 0.7676$\pm$0.0036 | - | | ✓ (B) | ogbg-molpcba | -| - | 0.2580$\pm$0.0026 | | ✓ (B) | gen-reg | 0.0007$\pm$0.0001 | - | - | | ✓ (B) | gen-cls | - | 0.9157$\pm$0.0013 | - | | ✓ (S) | QM9 | 0.0051$\pm$0.0009 | - | - | | ✓ (S) | ZINC | 0.0048$\pm$0.0004 | - | - | | ✓ (S) | ogbg-molhiv | -| 0.7705$\pm$0.0027 | - | | ✓ (S) | ogbg-molpcba | -| - | 0.2579$\pm$0.0047 | | ✓ (S) | gen-reg | 0.0007$\pm$0.0001 | - | - | | ✓ (S) | gen-cls | - | 0.9157$\pm$0.0014 | - | **[W3]** Thanks for your help in improving our paper. The analysis of the hyperparameter, start ratio, under GraNT(B) is outlined below: We will include a more detailed discussion in the revision. 
|Dataset| Start_Ratio | 0.05 | 0.1 | 0.2 | 0.4 | 0.8 | full | |---|-------|------|-------|-------|-------|-------|-------| | QM9 | MAE | 0.0051 | 0.0053 | 0.0053 | 0.0053 | 0.0053 | 0.0051 | | QM9 | Training time (s) | 6392.26 | 6974.30 | 7918.51 | 10828.18 | 14081.66| 9654.81 | | ogbg-molhiv | ROC-AUC | 0.7546 | 0.7676 | 0.7652 | 0.7618| 0.7592 | 0.7572| | ogbg-molhiv | Training time (s) | 1362.09 | 1457.39 | 1719.43 | 2157.05 | 3173.92| 2163.5 | | gen-cls | ROC-AUC | 0. 9157| 0.9156 | 0. 9156| 0.9156| 0.9156| 0.9150| | gen-cls | Training time (s) | 6145.72 | 6237.92 | 6939.95 | 9459.22 | 13153.81 | 11662.25 | **[Typo]** Thanks for pointing it out. We will fix it in the revision.
Bayesian Active Learning for Bivariate Causal Discovery
Accept (poster)
Summary: This paper investigates the problem of identifying the causal direction between two variables, i.e., identifying whether x -> y or y -> x, through Bayesian active intervention. The paper addresses this problem as a hypothesis testing problem, with a testing statistic called "probability of decisive and correct evidence" ($P_{DC}$) that depends on the Bayes factor (likelihood ratio of the hypotheses over data D). It then proposes an iterative active learning strategy that selects intervention values of X in order to gain the most evidence according to the $P_{DC}$ criterion. It also provides a dynamic programming optimization strategy for planning interventions over multiple steps that reduces the combinatorial search space. The paper then discusses 3 main experiments: (1) binary X, Y with two different causal directions in the form of Bernoulli distributions; (2) a tree-structured distribution; and (3) a simulation environment with switches and lights. In all 3 experiment environments, it compares the method with 3 baselines: random, infogain, and multi-step infogain, where it shows the sample efficiency of the method over the baselines. Claims And Evidence: I am not sure whether I agree with the authors that the proposed test statistic $P_{DC}$ is valid for sample selection. From my understanding, the goal of the paper is to give a criterion for actively selecting new data to collect evidence for either direction, that is, each new selection of x_i is dependent on the historical trajectory. However, the statistic $P_{DC}$ depends on the Bayes factor, which assumes complete factorization for each x_i, i.e., independence. Methods And Evaluation Criteria: The paper only discusses two very specific distribution classes for X and Y: for the continuous case, the paper uses Gaussian; and for the categorical case, it uses Beta-Bernoulli. It is unclear to me if the results generalize to other distributions, e.g.
computational issues might arise that require more sophisticated approximation and optimization techniques. Theoretical Claims: See above. Experimental Designs Or Analyses: The experimental setting seems a bit weak to me, in that all the experiments provided essentially only focus on the case of binary variables and Bernoulli distributions. Specifically, the paper discusses 3 main experiments: (1) binary X, Y with two different causal directions in the form of Bernoulli distributions; (2) a tree-structured distribution; and (3) a simulation environment with switches and lights. In all 3 experiment environments, it compares the method with 3 baselines: random, infogain, and multi-step infogain. The "strength" of the causation is tuned by the magnitude of the difference in conditional probabilities. I'm not sure how the embodied AI switch-light reasoning task is more complicated than the binary examples in 5.1 and 5.2, given the switches S_1, S_2, and lights L_1, L_2 are just indicator variables. The paper mentions the addition of a latent confounder, but how does the confounder affect the dynamics and why does it make the problem harder? Moreover, in Table 2, it is not clear that the paper's method performs better than the other methods, given that the performances of the paper's method are inside the confidence set of the baseline performances for every single instance. Supplementary Material: I skimmed the supplementary material but did not read it in full detail. Relation To Broader Scientific Literature: The paper designs a Bayesian active learning technique to address the question of distinguishing the causal direction of bivariate systems. The proposed selection criterion uses the idea of hypothesis testing to measure the strength of evidence in favor of the null hypothesis over the alternative. Essential References Not Discussed: n.a. Other Strengths And Weaknesses: n.a. Other Comments Or Suggestions: n.a.
Questions For Authors: - The paper shows how to deal with tree-structured graphs. How would you adapt your method if the graph is not a tree but a more general DAG? - If the distribution $p(y\mid x)$ is misspecified, how does that affect the reliability of $P_{DC}$? - For experiment section 5.3, I'm not sure how the embodied AI switch-light reasoning task is more complicated than the binary example in 5.1 and 5.2, given the switches S_1, S_2, and lights L_1, L_2 are just indicator variables. The paper mentions the addition of latent confounder, but how does the confounder affect the dynamic and why does it make the problem harder? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your efforts and feedback regarding our paper. We address your concerns below. **Validity of PDC for sample selection.** Our framework is valid since the factorization of BF still holds if $x\_i$ depends on the historical data. Intuitively, this is because the dependency only affects the selection of $\overline{x}$, while BF compares the ratio of two hypotheses that differ only in $p\\{y|\mathrm{do}(x)\\}$. Specifically, the joint probability $\mathbb{P}\\{\mathcal{D}\_m|\mathbb{H}\_0\\}$ is: $$\mathbb{P}\\{\mathcal{D}\_m|\mathbb{H}\_0\\} = \mathbb{P}\\{\overline{x}\_1\\}\mathbb{P}\\{\overline{y}\_1|\mathrm{do}(\overline{x}\_1)\\} \prod\_{i=2}^m \mathbb{P}\\{\overline{x}\_i|\overline{x}\_{1:i-1},\overline{y}\_{1:i-1}\\}\mathbb{P}\\{\overline{y}\_i|\mathrm{do}(\overline{x}\_i)\\}$$ since $\overline{y}\_i$ only depends on the value of the intervened $\overline{x}\_i$ during generation. We can derive a similar factorization for $\mathbb{P}\\{\mathcal{D}\_m|\mathbb{H}\_1\\}$. When computing the Bayes factor, the term $\mathbb{P}\\{\overline{x}\_i|\cdot\\}$ cancels between the numerator and denominator, leading to: $$\mathrm{BF}(\mathcal{D}\_m) = \prod\_{i=1}^m \frac{p\\{\overline{y}\_i|\mathrm{do}(\overline{x}\_i),\mathbb{H}\_0\\}}{p\\{\overline{y}\_i|\mathrm{do}(\overline{x}\_i),\mathbb{H}\_1\\}}$$ The next intervention value $\overline{x}\_{m+1}$ is then optimized based on $\mathcal{D}\_m:=\\{(\overline{x}\_i,\overline{y}\_i)\\}\_{i=1}^m$. In summary, while the selection of the intervention value depends on the historical information, the factorization still holds as we can cancel out these terms. **Distribution classes and generalizability.** As detailed in Appendix D.3, we do not restrict the model to be Gaussian. We have conducted experiments for uniform, normal, Student's t, and Gaussian mixture distributions, and the results are shown in Figure 9.
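The cancellation argument can be checked numerically. Below is a minimal Python sketch (the Bernoulli tables, the observed trajectory, and the "least-tried" intervention policy are hypothetical illustrations, not from the paper): even when each intervention value is chosen from the history, the joint-likelihood ratio equals the product of per-step likelihood ratios, because the policy term is identical under both hypotheses.

```python
import math

# Hypothetical interventional Bernoulli models p{y=1 | do(x), H}.
P_H0 = {0: 0.2, 1: 0.8}  # under H0, Y responds to do(X)
P_H1 = {0: 0.5, 1: 0.5}  # under H1, Y is unaffected by do(X)

def lik(y, x, table):
    return table[x] if y == 1 else 1.0 - table[x]

# History-dependent intervention policy: intervene on the least-tried value.
def next_x(history):
    n0 = sum(1 for x, _ in history if x == 0)
    return 0 if n0 <= len(history) - n0 else 1

# A fixed observed trajectory; each x_i is chosen from the history.
history, ys = [], [1, 0, 1, 1]
for y in ys:
    history.append((next_x(history), y))

# Bayes factor as a product of per-step likelihood ratios.
bf = 1.0
for x, y in history:
    bf *= lik(y, x, P_H0) / lik(y, x, P_H1)

# Joint likelihood including the policy term P{x_i | history}, which is
# hypothesis-independent and therefore cancels in the ratio.
def joint(table):
    p, hist = 1.0, []
    for x, y in history:
        p *= 1.0 if x == next_x(hist) else 0.0  # P{x_i | history}
        p *= lik(y, x, table)
        hist.append((x, y))
    return p

assert math.isclose(bf, joint(P_H0) / joint(P_H1))
```

The policy term here is deterministic (probability 1 for the chosen value), but the same cancellation holds for any stochastic policy that does not depend on the hypothesis.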
**Adapting to general DAGs.** As discussed in lines 283-284, our current version cannot work on general graphs since it requires $p\\{y|\mathrm{do}(x)\\}$ to be identifiable from observational data. A promising solution is to extend $\mathbb{P}\_{\mathrm{DC}}$ as the ratio of maximum likelihood over the equivalence class: $$\mathrm{BF}\_{01} = \frac{\max\_{G \in \mathcal{G}^{(x,y)}\_0} \mathbb{P}\\{\mathcal{D}| G,\mathrm{do}(x),\mathbb{H}\_0\\}}{\max\_{G \in \mathcal{G}^{(x,y)}\_1} \mathbb{P}\\{\mathcal{D}| G,\mathrm{do}(x),\mathbb{H}\_1\\}}$$ where $\mathcal{G}^{(x,y)}\_0$ (*resp.* $\mathcal{G}^{(x,y)}\_1$) denotes the Markov equivalence class regarding $p\\{y|\mathrm{do}(x)\\}$ under $\mathbb{H}\_0$ (*resp.* $\mathbb{H}\_1$). Since $\mathcal{G}^{(x,y)}\_0$ (*resp.* $\mathcal{G}^{(x,y)}\_1$) encompasses the true graph under $\mathbb{H}\_0$ (*resp.* $\mathbb{H}\_1$), it recovers the true likelihood ratio. We will pursue this direction in the future. **Effect of distribution misspecification.** If $p\\{y|\mathrm{do}(x)\\}$ is misspecified, the $\mathbb{P}\_{\mathrm{DC}}$ would not be accurate. As claimed in lines 283-284, 432-434 (right column), our method requires $p\\{y|\mathrm{do}(x)\\}$ to be identified correctly. A possible solution is to update with interventional data, such as setting $k\_0 \leq 1$. We will explore this in the future. **Difference between switch-light task and simulation.** The switch-light task is different from the one in the simulation, as it involves confounding bias caused by $S\_1 \rightarrow L\_1$, $S\_2 \rightarrow L\_2$ or $S\_1 \rightarrow L\_2$, $S\_2 \rightarrow L\_1$. We present this to illustrate the applicability of our method to this setting. **About Table 2.** As claimed in line 411, we only say our method is competitive with others. Overall, we achieve comparable or better performance than other baselines across all tasks.
Summary: This paper presents a Bayesian active learning framework for identifying causal directions between variables through interventional strategies. Different from traditional information-theoretic approaches, it introduces an objective based on Bayes factors, which directly quantify the strength of evidence supporting one causal hypothesis over another. The authors formulate a sequential intervention objective that balances immediate and future evidential gains to optimize intervention selection. To manage the computational complexity, they propose a dynamic programming algorithm that makes multi-step intervention planning tractable. The framework is tested across bivariate systems, tree-structured graphs, and embodied AI reasoning tasks, showing superior performance over baselines like InfoGain and random strategies. This approach effectively improves causal direction discovery, especially under limited budgets and weak causal influences. [Attention: Kindly note that the paper slightly exceeds the 8-page main-paper limit by one line.] Claims And Evidence: Yes. The core part of this paper is the Bayesian active learning framework. Some preliminary information is clearly introduced in Section 3, and the core methodology is presented in detail in Section 4. Methods And Evaluation Criteria: Yes, this paper focuses on the bivariate causal discovery task. The proposed method formulates causal direction identification as a hypothesis-testing problem and proposes a Bayes factor-based intervention strategy, which is interesting and makes sense. As for the evaluation, it remains unclear why only Type I error and recall (without considering Type II error, precision, or F1) are used. Theoretical Claims: Yes, the only theoretical claims in this paper are Proposition 4.1 and Proposition 4.2. I have checked the correctness of these two propositions.
Experimental Designs Or Analyses: Yes, the experiments mainly cover bivariate causal discovery and tree-structured causal graph learning as synthetic experiments, and also causal reasoning in embodied AI as real-world experiments. Supplementary Material: Yes, the supplementary material mainly covers 4 parts: the connection between $P_{DC}$ and information gain, multi-step optimization, application to multivariate causal discovery, and experiments. I quickly went through all the contents. Relation To Broader Scientific Literature: Traditional approaches in active causal discovery have predominantly used information-theoretic objectives, while this paper shifts from information gain to Bayes factor-based objectives, which is a somewhat new aspect. Essential References Not Discussed: Yes. (1) The paper emphasizes decision-focused active learning via Bayes factors, but does not discuss classic Bayesian decision-theoretic foundations, for example, [Chick'2006]. (2) Several papers about decision-making under causal uncertainty are not mentioned in the paper, such as [Gonzalez'2018]. [Chick'2006] Chick, S.E. “Bayesian Approaches to Optimal Decision Making Under Uncertainty”, 2006. [Gonzalez'2018] M. Gonzalez-Soto, et al. "Playing against Nature: causal discovery for decision making under uncertainty", 2018. Other Strengths And Weaknesses: **Strengths:** - With its Bayes factor-based objective, this paper offers a direct and interpretable way to assess causal direction, addressing a major limitation of indirect mutual information-based methods. - When sequential optimization meets dynamic programming, it provides a theoretically grounded and computationally efficient way to plan multi-step interventions, outperforming greedy or myopic baselines. - The proposed method is robust across settings. It demonstrates consistent performance in both weak and strong causation scenarios, making it suitable for diverse real-world applications.
- It is scalable to multivariate causal discovery, extending naturally to tree-structured graphs and multivariate problems, and showing flexibility and generalization potential. **Weaknesses and Comments:** - Heavily relying on accurate prior estimation, this paper requires prior specification from observational data and does not update priors using intervention data, which could limit adaptation to complex systems. - Scalability to general graphs is still limited and challenging. While the method works well on tree structures, extension to general cyclic graphs or complex network structures remains a challenge. Other Comments Or Suggestions: Typos: - Line 399: At Table 2, "Table 2. Average number of interventions required in the switch-ligh reasoning task" -> "switch-light reasoning task"; - Line 424: "We develope a simulation environment based on TongSim" -> "We develop a simulation ..."; Questions For Authors: - Why are only Type I error and recall used, while not considering Type II error, precision, and F1, in the experimental evaluations? - How robust is the Bayes factor-based intervention strategy if the true data-generating process violates causal sufficiency? - How sensitive is your method’s performance to these hyperparameters? e.g., $k_0$ = 10, $k_1$ = 0.1; how did you find those values? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your efforts and feedback in reviewing our paper. We address your questions below. **Why estimate priors using observational data?** We do not update priors using interventional data since, if $k\_0 > 1$, it is difficult for $\mathrm{BF}\_{01}$ to be greater than $k\_0$ to identify $\mathbb{H}\_0$. To explain, even if $\mathbb{H}\_0$ is true, the data may exhibit a weak correlation between $X$ and $Y$. In such cases, it could happen that $\mathbb{P}(\mathcal{D}|\mathbb{H}\_1) \geq \mathbb{P}(\mathcal{D}|\mathbb{H}\_0)$, making it fail to identify $\mathbb{H}\_0$ with $k\_0 > 1$. A possible solution is to choose a lower $k\_0$, such as setting $k\_0 \leq 1$. We will explore this in the future. **Scalability to general graphs.** We can extend our method to general graphs by redefining $\mathbb{P}\_{\mathrm{DC}}$ as the ratio of maximum likelihood over the equivalence class: $$\mathrm{BF}\_{01} = \frac{\max\_{G \in \mathcal{G}^{(x,y)}\_0} \mathbb{P}\\{\mathcal{D}| G,\mathrm{do}(x),\mathbb{H}\_0\\}}{\max\_{G \in \mathcal{G}^{(x,y)}\_1} \mathbb{P}\\{\mathcal{D}| G,\mathrm{do}(x),\mathbb{H}\_1\\}}$$ where $\mathcal{G}^{(x,y)}\_0$ (*resp.* $\mathcal{G}^{(x,y)}\_1$) denotes the Markov equivalence class regarding $p\\{y|\mathrm{do}(x)\\}$ under $\mathbb{H}\_0$ (*resp.* $\mathbb{H}\_1$). Since $\mathcal{G}^{(x,y)}\_0$ (*resp.* $\mathcal{G}^{(x,y)}\_1$) encompasses the true graph under $\mathbb{H}\_0$ (*resp.* $\mathbb{H}\_1$), it recovers the true likelihood ratio. We will pursue this direction in the future. **Evaluation metrics.** In our causal hypothesis testing setting, where $\mathbb{H}\_1$ (a causal relationship exists) is the primary concern in causal discovery, we focus on the Type I error rate (probability of detecting $\mathbb{H}\_1$ when $\mathbb{H}\_0$ is true) and recall (probability of detecting $\mathbb{H}\_1$ when $\mathbb{H}\_1$ is true, which is 1 - Type II error rate).
We have provided all metrics (Type I error rate, Type II error rate, precision, recall, F1) at https://sites.google.com/view/additional-figures-icml4692/. Our method achieves competitive precision while outperforming baselines on other metrics. We will include these comprehensive results in the final version. **Robustness to causal sufficiency violations.** We assume causal sufficiency since our method requires the identifiability of $p\\{y|\mathrm{do}(x)\\}$, as claimed in lines 283-284, 432-434 (right column). To relax this condition, we can estimate $p\\{y|\mathrm{do}(x)\\}$ using interventional data. In this regard, we should adjust the threshold for $\mathrm{BF}\_{01}$, as discussed in our response to the first question. **Threshold of Bayes Factor.** We chose the threshold following [1,2], which provide evidence categories for different thresholds; accordingly, we chose $k\_0=10$ as the threshold for "strong evidence". We also conducted experiments with $k\_0=30, 100$ ("very strong" and "extreme evidence"), available at https://sites.google.com/view/additional-figures-icml4692/, and the results verified the utility of our methods. We will include these experimental results in the final version. **Additional references.** Thanks for your suggestions; we will add these references and discuss them accordingly. **References** [1] Jeffreys, H. Theory of Probability (3rd Edition). Oxford, University Press, 1961. [2] Schönbrodt, F. D. and Wagenmakers, E.-J. Bayes factor design analysis: Planning for compelling evidence. Psychonomic bulletin & review, 25(1):128-142, 2018. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses. Overall, I like the core idea of this paper, which formulates causal direction identification as a hypothesis testing problem and makes use of a Bayes factor-based intervention strategy. Therefore, I stand by my positive score.
Summary: This paper investigates the problem of determining the direction of relationships between variables by active intervention. The previous literature tries to maximize information-theoretic gains for deciding the intervention value, which may not effectively measure the reliability of direction determination. In contrast, this paper leverages the Bayes factor to formulate the objective function. To consider multi-step intervention, this paper also proposes a sequential intervention objective and develops a dynamic programming algorithm to solve it. The extensive experimental results show the effectiveness of the method in requiring fewer interventions and determining the correct direction more often. Claims And Evidence: I think the empirical comparison experiments and the method design support the claims. The method performs better in direction determination and can be applied in a wide range of scenarios. Methods And Evaluation Criteria: I have some concerns about the design of the methods. The details can be found in the Questions. The evaluation process makes sense. Theoretical Claims: I checked the proof. However, I think the statement in Proposition 4.2 has a flaw: "Given $\bar{x}_{-k}^*$" should be modified to "Given $\bar{x}_{1:k-1}^*$". Experimental Designs Or Analyses: I think the experimental design and analysis make sense and clearly show the effectiveness of the method. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: This paper investigates the classical problem of determining the causal direction. The main contribution is concentrated on the novel objective functions developed with Bayes factors and an improvement to the multi-step version. Essential References Not Discussed: I think the main references have been discussed. Other Strengths And Weaknesses: This paper investigates a solid and challenging problem. The proposed method is advanced, but leaves some concerns about its soundness.
The details of the concerns can be found in the questions. Other Comments Or Suggestions: No other comments. Questions For Authors: I have the following concerns about the method. 1. In equations (1b) and (1c), the probability is conditional on $D_m$. However, the definition of BF is not related to this condition. In contrast, in equation (6), this condition is missing. 2. In the next paragraph, how is the threshold of $\omega \frac{\gamma_0}{1-\gamma_0}$ derived? 3. In the objective function (8), the discount $\gamma$ is missing. 4. According to the Complexity Analysis subsection, the time complexity is exponential rather than polynomial. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your efforts and positive feedback regarding our paper. We address your concerns below. **Definition of $\mathbf{x\_{-k}}$ in Proposition 4.2.** Thank you for your clarification. We will correct this in the updated version. **Conditional Probability in equations (1b), (1c), and (6).** As mentioned in lines 201-202 (right column), we omit $\mathcal{D}\_m$ for simplicity. We apologize for any confusion and will highlight this in the updated version. **Threshold derivation.** Thank you for pointing this out. The threshold should be $\frac{\gamma\_0}{(1-\gamma\_0)\omega}$. According to Bayes' formula, we have: $$\mathrm{BF}\_{01} = \frac{\mathbb{P}(\mathcal{D}|\mathbb{H}\_0)}{\mathbb{P}(\mathcal{D}|\mathbb{H}\_1)} = \frac{\mathbb{P}(\mathbb{H}\_0|\mathcal{D})\mathbb{P}(\mathbb{H}\_1)}{\mathbb{P}(\mathbb{H}\_1|\mathcal{D})\mathbb{P}(\mathbb{H}\_0)}$$ Since we require $\mathbb{P}(\mathbb{H}\_0|\mathcal{D}) > \gamma\_0$, and by $\omega:=\frac{\mathbb{P}(\mathbb{H}\_0)}{\mathbb{P}(\mathbb{H}\_1)}$, we should have $k\_0 = \frac{\gamma\_0}{(1-\gamma\_0)\omega}$. **Missing Discount Factor $\gamma$.** Thank you for pointing this out. We will correct it in the revision. **Complexity Analysis.** The complexity is related to the number of intervention steps $K$, rather than $|\mathcal{X}|$. Since $X$ is discrete, the cardinality of the input space $|\mathcal{X}|$ is finite and remains constant once the setting is determined.
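The threshold derivation can be sanity-checked numerically with a minimal sketch (the values of $\gamma_0$ and $\omega$ below are arbitrary illustrations): the posterior $\mathbb{P}(\mathbb{H}_0|\mathcal{D})$ exceeds $\gamma_0$ exactly when $\mathrm{BF}_{01}$ exceeds $k_0 = \gamma_0/((1-\gamma_0)\omega)$.

```python
# Check that posterior P(H0 | D) > gamma0 iff BF_01 > gamma0 / ((1 - gamma0) * omega).
def posterior_h0(bf, omega):
    odds = bf * omega          # posterior odds = Bayes factor * prior odds
    return odds / (1.0 + odds)

gamma0, omega = 0.9, 0.5       # arbitrary illustrative values
k0 = gamma0 / ((1.0 - gamma0) * omega)   # = 18.0 for these values

for bf in (0.5 * k0, 0.99 * k0, 1.01 * k0, 10.0 * k0):
    assert (posterior_h0(bf, omega) > gamma0) == (bf > k0)
```

At exactly $\mathrm{BF}_{01} = k_0$ the posterior odds equal $\gamma_0/(1-\gamma_0)$, so the posterior equals $\gamma_0$, matching the corrected threshold.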
Summary: This paper focuses on causal discovery through the Bayes factor. Instead of using information-theoretic gains to determine the direction of causal relationships, this paper adopts the Bayes factor, formulating the task as hypothesis testing. Furthermore, it uses sequential experiment design to selectively gather information for effective optimization. ## update after rebuttal: overall this work presents an interesting idea. The authors' responses help clarify my questions. I would like to keep my positive score. Claims And Evidence: One claim is that: “we employ Bayes factors to establish an objective that directly optimizes the probability of obtaining decisive and accurate evidence, leading to more efficient causal discovery.” This claim is supported by discussions on the connection to information gain. However, the real benefit of replacing traditional methods with Bayes factors is not clear to me. Methods And Evaluation Criteria: The proposed methods make sense to me. I think it is an interesting idea to formulate the task as a hypothesis-testing problem. Theoretical Claims: I have checked the derivations showing the connection to information gain. Experimental Designs Or Analyses: How does the proposed approach compare to (Castelletti & Consonni, 2024)? Supplementary Material: I have reviewed the supplementary material. I particularly checked Appx. A, C, D. Relation To Broader Scientific Literature: Improving the efficiency of causal discovery is important in many real-world applications, especially those involving reasoning. This work could lay a strong foundation for empowering AI systems with causal reasoning. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. It is interesting to formulate the task as an independence test. Is there any justification for using the Bayes factor instead of other techniques for performing independence tests? 2.
How is the threshold of the Bayes factor determined? Is the performance sensitive to the threshold? 3. In Table 1, three different sizes of the sample set are considered. For the bivariate case, these sizes should be sufficient. What about further reducing the sample size? Will the model remain robust under insufficient data? 4. For multivariate cases, the authors discussed a 3-node graph in the Appendix and studied tree graphs in the main paper. Under which conditions does the proposed approach work? In other words, how does the proposed approach generalize to complex graphs with multiple variables and with loops? 5. What is the complexity of the proposed approach compared to existing methods? 6. How does the proposed approach compare to (Castelletti & Consonni, 2024) in practice? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your efforts and valuable feedback in reviewing our paper. Here are our responses.

**About the benefits of using the Bayes Factor.** We choose the Bayes factor because it naturally aligns with our goal and hence is more efficient for optimization. Bayes factors are commonly used in experimental design to assess the strength of competing hypotheses. When expressed in terms of $\mathbb{P}\_{\mathrm{DC}}$, it directly supports our objective of maximizing certainty and correctness in causal discovery. While maximizing $\mathbb{P}\_{\mathrm{DC}}$ can also increase the information gain, the information gain is a transformation of $\mathbb{P}\_{\mathrm{DC}}$, and optimizing this transformation is less efficient. As shown in Figures 1 and 3, $\mathbb{P}\_{\mathrm{DC}}$ offers higher intervention efficiency compared to information gain.

**Threshold of the Bayes Factor.** We choose the threshold following [1,2], which provide evidence categories for different thresholds; accordingly, we chose $k\_0=10$, the threshold for "strong evidence". We also conducted experiments with $k\_0=30, 100$ ("very strong" and "extreme evidence"), available at https://sites.google.com/view/additional-figures-icml4692/, and the results verified the utility of our methods. We will include these experimental results in the final version.

**Performance with small sample sizes.** We tested our method with $n=100$ and found it remains robust compared to the result with $n=1,000$.
Below are the results for the Type I error rate (the proportion of cases where $\mathrm{BF}\_{10} > k\_1$ under $\mathbb{H}\_0$, which differs from the more conservative definition used in the paper: $\mathrm{BF}\_{10} > \frac{1}{k\_0}$ under $\mathbb{H}\_0$) and recall (the proportion of cases where $\mathrm{BF}\_{10} > k\_1$ under $\mathbb{H}\_1$):

|$\|\mathcal{D}\_{\mathrm{obs}}\|$|Type I Error (Setting 1)|Recall (Setting 1)|Type I Error (Setting 2)|Recall (Setting 2)|
|---|---|---|---|---|
|$100$|$0.0205\pm0.0128$|$0.7595\pm 0.0483$|$0.0005\pm 0.0022$|$1.0000\pm 0.0000$|
|$1000$|$0.0305\pm0.0186$|$0.8840\pm 0.0325$|$0.0000\pm 0.0000$|$1.0000\pm 0.0000$|

These results show our method maintains high performance even with significantly reduced observational data.

**Applicability to more complex graphs.** As discussed in lines 283-284, our current method requires $p\\{y|\mathrm{do}(x)\\}$ to be identifiable from observational data, in order to calculate $\mathbb{P}\_{\mathrm{DC}}$. As claimed in the "Discussion" section, this might not be achievable in multivariate causal discovery, as $p\\{y|\mathrm{do}(x)\\}$ cannot be uniquely determined among the Markov equivalence class. To resolve this problem, a promising solution is to extend $\mathbb{P}\_{\mathrm{DC}}$ as the ratio of maximum likelihoods over the equivalence classes: $$\mathrm{BF}\_{01} = \frac{\max\_{G \in \mathcal{G}^{(x,y)}\_0} \mathbb{P}\\{\mathcal{D}| G,\mathrm{do}(x),\mathbb{H}\_0\\}}{\max\_{G \in \mathcal{G}^{(x,y)}\_1} \mathbb{P}\\{\mathcal{D}| G,\mathrm{do}(x),\mathbb{H}\_1\\}}$$ where $\mathcal{G}^{(x,y)}\_0$ (*resp.* $\mathcal{G}^{(x,y)}\_1$) denotes the Markov equivalence class regarding $p\\{y|\mathrm{do}(x)\\}$ under $\mathbb{H}\_0$ (*resp.* $\mathbb{H}\_1$).
Since $\mathcal{G}^{(x,y)}\_0$ (*resp.* $\mathcal{G}^{(x,y)}\_1$) encompasses the true graph under $\mathbb{H}\_0$ (*resp.* $\mathbb{H}\_1$), the maximum value recovers the true likelihood. We will pursue this direction in the future.

**Computational Complexity.** Existing methods [3] adopted a greedy strategy. To avoid the local-minima issue [4], they need to optimize a multi-step objective, whose complexity is exponential. By using dynamic programming, we reduce it to polynomial complexity.

**Comparison with Castelletti & Consonni (2024).** The comparison may not be appropriate as we consider different settings. Castelletti & Consonni (2024) focuses on pre-experiment sample size determination without interaction with the environment. In contrast, we focus on determining the intervention value and can receive feedback from the environment at each iteration.

**References**

[1] Jeffreys, H. Theory of Probability (3rd edition). Oxford University Press, 1961.

[2] Schönbrodt, F. D. and Wagenmakers, E.-J. Bayes factor design analysis: Planning for compelling evidence. Psychonomic Bulletin & Review, 25(1):128-142, 2018.

[3] Toth, C., Lorch, L., Knoll, C., et al. Active Bayesian causal inference. Advances in Neural Information Processing Systems, 35:16261-16275, 2022.

[4] Greenewald, K., Katz, D., Shanmugam, K., Magliacane, S., Kocaoglu, M., Boix-Adsera, E., and Bresler, G. Sample efficient active learning of causal trees. Advances in Neural Information Processing Systems, 32, 2019.
Ensemble Distribution Distillation via Flow Matching
Accept (poster)
Summary: The paper presents an ensemble distribution distillation method leveraging flow matching to efficiently transfer knowledge from an ensemble teacher to a smaller student model.
- A new approach that models ensemble distribution distillation using flow matching, enabling student models to better capture the diversity of teacher ensembles.
- A theoretical formulation where flow matching is applied in logit space, optimizing a vector field to map student model predictions to the ensemble teacher’s distribution.
- Extensive experimental validation on image classification (CIFAR-10/100), commonsense reasoning (ARC, OBQA), and robustness evaluations under distribution shifts (CIFAR-10.1, STL, SVHN).
- Demonstration that the proposed EDFM (Ensemble Distribution Flow Matching) method outperforms previous ensemble distillation approaches, improving accuracy, calibration, and efficiency while reducing computational costs.
Claims And Evidence:
- The authors claim that flow matching enables better ensemble distribution distillation by capturing ensemble diversity more effectively than previous methods. Supported by empirical results in diversity, fidelity, and robustness experiments that show EDFM consistently outperforming baselines.
- The paper claims that EDFM improves both efficiency and predictive performance compared to existing distillation techniques. Supported by runtime analysis showing that EDFM achieves strong scalability and efficiency with minimal computational overhead.
- It asserts that EDFM preserves ensemble teacher properties better than competing methods. Supported by fidelity analysis, where EDFM achieves the closest alignment with the ensemble teacher in various distance metrics (TVD, KLD, JSD).
- The paper claims that flow matching naturally captures ensemble diversity, but it does not include direct theoretical justification for why flow matching should perform better in this setting compared to prior distillation methods.
Methods And Evaluation Criteria: The proposed flow matching approach is well-motivated and clearly formulated. The evaluation methodology is strong, using multiple baselines (e.g., KD, EnDD, FED, DBN) and comprehensive metrics (accuracy, calibration, fidelity, efficiency). The benchmarks (CIFAR, ARC, OBQA, STL, SVHN) are appropriate for the task. The paper does not provide ablation studies that analyze how different design choices (e.g., type of flow matching loss, student model architecture) affect performance. Theoretical Claims: The paper provides a theoretical formulation of flow matching for ensemble distillation. The derivation of the vector field for flow matching in logit space is consistent with prior work on normalizing flows. Theoretical justification for why EDFM better captures ensemble diversity is not fully developed. The claims about exponential convergence of flow matching (inspired by prior generative modeling work) are not rigorously derived for the ensemble distillation setting. Experimental Designs Or Analyses: The experiments are comprehensive, covering different dataset shifts, evaluation metrics, and baselines. The fidelity and diversity analyses are particularly strong, demonstrating how well the student model mimics ensemble predictions. Ablation experiments (e.g., varying the number of flow matching steps, testing different vector field parameterizations) are missing. The paper could include more real-world benchmarks beyond academic datasets to validate its practical effectiveness. Supplementary Material: The appendix includes background on transformers, additional proofs, and experiment details. The extended discussion on ensemble variance decomposition is useful. Some key comparisons to prior work (e.g., Zhang et al. 2023, Jiang 2023) are only in the appendix instead of the main text. Relation To Broader Scientific Literature: The paper effectively connects ensemble distillation to generative flow matching methods. 
It situates EDFM within the broader context of knowledge distillation and uncertainty estimation. The discussion of ensemble teacher diversity is well-aligned with prior work in Bayesian deep learning. The connection to recent advances in diffusion-based distillation (e.g., Schrödinger bridge methods) could be better explored. The paper does not address non-Bayesian perspectives on ensemble distillation, which could be valuable for robustness.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
Strengths:
- Novelty: The paper introduces flow matching into ensemble distillation, which is a new and promising direction.
- Strong empirical results: EDFM consistently outperforms existing methods across multiple tasks and metrics.
- Efficiency: The method is computationally efficient, reducing the overhead of traditional ensemble models.
- Comprehensive evaluation: The paper rigorously tests EDFM under distribution shifts and real-world tasks.
Weaknesses:
- Limited theoretical justification: The paper lacks strong formal arguments for why flow matching should improve ensemble distillation.
- No ablation studies: The sensitivity of EDFM to hyperparameters is not analyzed.
- Comparison to recent generative modeling approaches (e.g., score-based methods) is missing.
- Some experimental design choices (e.g., why a specific type of flow matching loss is used) are not well explained.
Other Comments Or Suggestions: The writing could be clearer in some sections, especially in theoretical explanations (e.g., the connection between flow matching and Bayesian ensembles). Ablation studies (e.g., varying flow matching step size, different vector field architectures) would strengthen the empirical claims. A more explicit comparison to diffusion-based distillation methods would be useful.
Questions For Authors: How does EDFM perform when using fewer flow matching steps? Would reducing the number of steps significantly degrade performance?
How does the choice of the vector field parameterization affect distillation quality? Have you experimented with different neural network architectures for the vector field? How does EDFM compare to alternative generative modeling techniques for distillation? Could score-based diffusion models be used instead of flow matching? What are the limitations of EDFM when scaling to very large student models? Does the method suffer from optimization difficulties when used with transformers? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for recognizing our work as a new and promising direction. We are pleased that you found our extensive experimental results valuable and especially appreciate your recognition of our fidelity and diversity analyses, which are central to our contribution. We hope our responses address any remaining concerns—please reach out if you have any further questions.

> Some key comparisons to prior work (e.g., Zhang et al. 2023, Jiang 2023)...

We could not find Zhang et al. (2023) and Jiang (2023) in our manuscript. Could you kindly clarify which sections you are referring to?

> The connection to recent advances in diffusion-based distillation...

In the context of ensemble distribution distillation, DBN (Kim et al., 2024) represents a recent advance. While we discuss its relevance and differences in Section 3 (Related Work) and Appendix B.2 (Ensemble Distillation Methods), we will further clarify this in the camera-ready version.

> The paper does not address non-Bayesian perspectives...

Could you kindly clarify what is meant by “non-Bayesian perspectives”? If we understand correctly, point-estimating the mean of the ensemble teacher prediction would fall under non-Bayesian approaches, while ensemble distribution distillation, which models the distribution of the ensemble teacher prediction, would be considered Bayesian. In this case, KD and DBN would correspond to the former, and EnD, FED, and our EDFM to the latter. We would appreciate any further insights.

> How does EDFM perform when using fewer flow matching steps...

In the runtime analysis (Figure 4, Section 5.4), we compared NFE values of {3, 5, 7} (corresponding to 3, 4, 5 steps) and selected NFE=7 for the main tables based on the performance-cost trade-off. As shown, fewer flow matching steps (NFE=3) led to slight performance drops, though NFE=5 showed no significant degradation.
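To make the step-count trade-off concrete for readers, a sampler of the kind discussed here can be sketched in a few lines. The grid constant, the Heun-style update, and the toy vector field below are our illustrative choices, not the paper's implementation:

```python
import math

def exp_schedule(n_points: int, c: float = -3.0) -> list:
    """Time grid on [0, 1]; for c < 0, grid points cluster near t = 1."""
    s = [k / (n_points - 1) for k in range(n_points)]
    return [(math.exp(c * si) - 1.0) / (math.exp(c) - 1.0) for si in s]

def heun_integrate(v, x0: float, ts: list) -> float:
    """Integrate dx/dt = v(x, t) along the grid ts with Heun's method."""
    x = x0
    for t0, t1 in zip(ts, ts[1:]):
        dt = t1 - t0
        k1 = v(x, t0)            # Euler predictor slope
        k2 = v(x + dt * k1, t1)  # corrector slope at the predicted point
        x = x + 0.5 * dt * (k1 + k2)
    return x

# Steps are denser near t = 1 than near t = 0.
ts = exp_schedule(6)
assert ts[1] - ts[0] > ts[-1] - ts[-2]

# Toy accuracy check on a known field v(x, t) = x, whose exact flow is x(1) = e.
assert abs(heun_integrate(lambda x, t: x, 1.0, ts) - math.e) < 0.05
```

With a second-order update like this, each interval costs two vector-field evaluations, which is why the reported NFE grows roughly twice as fast as the number of grid points.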
We agree that further ablations on the number of steps, solver, and scheduler choices would strengthen the paper. As a follow-up, we conducted an ablation study on: 1) uniform vs. exponential schedules and 2) Euler vs. Heun methods across different step counts. The figure (https://imgur.com/a/fCd15ca) summarizes the results; filled markers indicate main table settings (Heun with exponential schedule, NFE=5). This underscores the value of carefully designing the sampling procedure with performance-cost in mind—as we have already addressed. Notably, the exponential schedule, rarely used in the flow matching literature, was uniquely proposed and tailored for EDFM, proving critical in our distillation setup. We will include a separate section on hyperparameter sensitivity in the camera-ready version.

> How does EDFM compare to alternative generative modeling techniques...

Exploring alternative generative methods is an interesting future direction. We implemented EDM (Karras et al., 2022) to train the student network and compared it with our EDFM approach. In our CIFAR-10 setup, EDM achieved an NLL of 0.224 with 35 NFEs, while EDFM reached 0.216 with only 5 NFEs; EDM was unable to achieve reasonable performance with just 5 NFEs (it achieved an NLL of 0.370). This clearly shows the effectiveness and efficiency of our EDFM approach.

> How does the choice of the vector field parameterization...
>
> ...optimization difficulties when used with transformers?

During the rebuttal period, we conducted an ablation study on network architectures (MLP, Transformer, and U-Net) of varying scales. The results are as follows: 1) Larger Transformers (1.31M parameters) improve NLL and ECE but are not cost-effective, as evidenced by the “#Params” and “Wall-clock time” columns (where wall-clock time refers to the duration required for 256 ensemble predictions via batched inference).
2) U-Net underperforms compared to both MLP and Transformer, likely due to its 2D spatial processing being less suited for handling logits. These findings indicate that the MLP architecture is sufficient for ensemble distribution distillation, providing a latency advantage. Furthermore, no issues related to training instability were observed, even with large student networks or Transformers.

Data|Arch|#Params|ACC / NLL / ECE|Wall-clock time (microseconds)
-|-|-|-|-
C10|MLP|0.14M|0.929 / 0.229 / 0.013|__39__
| | |0.33M (Ours)|__0.931__ / 0.216 / 0.009|68
| | |0.70M|0.930 / 0.218 / 0.011|132
| |Transformer|0.36M|0.930 / 0.218 / 0.011|204
| | |1.31M|__0.931__ / __0.213__ / __0.007__|724
| |U-Net|0.44M|0.929 / 0.222 / 0.012|653

> type of loss

We also conducted an ablation study on the type of flow matching loss, comparing x-prediction and v-prediction, to find that the latter consistently outperforms the former. For other formulations of flow matching, please refer to our comment to reviewer AwX6.

Data|Type|ACC / NLL / ECE
-|-|-
C10|x-pred|__0.931__ / 0.234 / 0.024
| |v-pred (Ours)|__0.931__ / __0.216__ / __0.009__
C100|x-pred|0.760 / 1.031 / 0.090
| |v-pred (Ours)|__0.761__ / __0.932__ / __0.056__

--- Rebuttal Comment 1.1: Comment: Thank you for the answers. I decided to raise my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for clearly expressing your positive stance! Your detailed questions have been invaluable in shaping our additional experiments. We sincerely appreciate your constructive review once again!
Summary: The paper presents an ensemble distillation method based on flow matching named EDFM. The core idea is to learn a mapping between Gaussian noise and the logits of a (Bayesian) teacher model conditioned on the input data. The authors first analyze the importance of diversity in the predictions of the teacher when learning the student model by incorporating several previously established methods such as Mixup augmentations and auxiliary datasets. Then, the authors examine 3 aspects of their approach compared to other distillation methods (ones that attempt to preserve the ensemble diversity and ones that don't): (1) the ability to mimic the ensemble teacher models, (2) the diversity in the student's predictions, and (3) the generalization capabilities of the student. The proposed approach was examined using both image and text data.
Claims And Evidence: Yes.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Not relevant.
Experimental Designs Or Analyses: The experimental design is solid.
Supplementary Material: I skimmed through it. Not in a comprehensive manner.
Relation To Broader Scientific Literature: Relation to broader scientific literature is good.
Essential References Not Discussed: No.
Other Strengths And Weaknesses:
Strengths:
- The paper is written in a clear and cohesive manner. The importance of ensemble diversity is first analyzed and then each aspect of the distillation goals is inspected with proper empirical evidence.
- In terms of prediction diversity, EDFM outperforms all baseline methods in most cases. In addition, compared to FED, the main competing approach, EDFM tends to exhibit better performance.
- The proposed approach is efficient in terms of the storage required and inference time when using a proper student network.
Weaknesses/Questions:
- In terms of novelty, if I understand correctly, the paper is a combination of previous ideas in the distillation literature (FED and DBN), namely using generative models for distillation and capturing diversity in the teacher model.
- The decision to model the logits as Gaussian random variables needs further justification. The probabilities after the Softmax layer are invariant to transformations of the logits such as shifting, which is not consistent with the Gaussian assumption. Perhaps it would be more appropriate to use a different approach such as CatFlow [1], which learns a flow directly in the probability space.
- In terms of accuracy, EDFM does not generalize as well as EnDD and DBN on in-distribution and distribution shift tasks. Will EDFM generalization improve by incorporating a bigger student model? A further analysis may be required here. Perhaps a different design choice for the flow model (such as the one described in the previous bullet) can help as well.
- I wonder how EDFM outperforms the teacher model by a large margin on text data. In contrast, on image data the teacher model is usually better by a large margin (as also witnessed in the literature). I believe that further clarification and investigation is required here.
- Minor:
  - Line 199, left column: did you mean $z_{t}^{x}$?
  - Missing citation of the rectified flow paper in line 165, right column.
  - The meaning of the abbreviation EDFM is not defined.
  - Missing reference to and discussion of Table 2 in the main text.

[1] Eijkelboom, F., Bartosh, G., Andersson Naesseth, C., Welling, M., & van de Meent, J. W. (2024). Variational flow matching for graph generation. Advances in Neural Information Processing Systems, 37, 11735-11764.
Other Comments Or Suggestions: See Strengths And Weaknesses.
Questions For Authors: See Strengths And Weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the positive feedback highlighting our paper’s clarity and coherence. We hope our responses address any remaining concerns, and please reach out if you have any further questions.

> the paper is a combination of previous ideas in the distillation literature

We respectfully differ in our interpretation of the novelty claim and would like to reference Michael Black’s remark: "The novelty, however, must be evaluated before the idea existed." To the best of our knowledge, this is the first work to introduce flow matching as a novel framework for ensemble distribution distillation. Our work is a pioneering contribution that, through extensive experiments, demonstrates how flow matching can be both efficient and effective across three key aspects—diversity, fidelity, and generalization—in the context of ensemble distribution distillation. We believe this represents a clear and novel contribution to the community.

> The decision to model the logits as Gaussian random variables needs further justification.

Thank you for your insightful feedback on our Gaussian modeling of logits. We, too, initially explored combining logit geometry with flow matching. However, after preliminary investigation, we found "simple is best"—alternative formulations underperformed the simplest one. Notably, prior works such as Yun et al. (2023) and Kim et al. (2024) also predict logits instead of categorical probabilities.
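The shift-invariance point at issue can be illustrated in a few self-contained lines (our sketch, independent of the paper's code or CatFlow): adding a constant to every logit leaves the softmax output unchanged, whereas the log-sum-exp of the logits shifts by exactly that constant, which is the density-like information softmax discards.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def logsumexp(logits):
    """Stable log-sum-exp of a list of logits."""
    m = max(logits)
    return m + math.log(sum(math.exp(z - m) for z in logits))

z = [2.0, -1.0, 0.5]  # toy logit vector (illustrative values)
c = 7.3               # arbitrary shift
shifted = [zi + c for zi in z]

# Softmax is invariant to the shift: matching logits exactly is stricter
# than matching the predictive distribution itself.
assert all(abs(a - b) < 1e-12 for a, b in zip(softmax(z), softmax(shifted)))

# The log-sum-exp term, by contrast, moves by exactly c.
assert abs((logsumexp(shifted) - logsumexp(z)) - c) < 1e-9
```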
Below, we summarize results of revisiting this comparison under the current setup:

Data|Space|Distribution|ACC / NLL / ECE
-|-|-|-
C10|Probability simplex|SFM|0.929 / 0.244 / 0.019
| | |FM on simplex|0.929 / 0.243 / 0.015
| |Logit space|FM w/ ILR|__0.931__ / 0.218 / 0.011
| | |FM (Ours)|__0.931__ / __0.216__ / __0.009__

- __Statistical Flow Matching (SFM).__ Many flow matching formulations on the probability simplex, including CatFlow (Eijkelboom et al., 2024), are inapplicable here as they assume discrete data, whereas we handle a "continuous" logit distribution. SFM (Cheng et al., 2025) is the most relevant to our situation in this line of work, which provides a method to map the uniform distribution into arbitrary simplex distributions.
- __Flow Matching on Probability Simplex (FM on simplex).__ Alternatively, one can apply flow matching directly on the simplex, mapping the uniform distribution to the ensemble distribution. This is valid as the simplex itself is a Euclidean space.
- __Isometric Log-Ratio Transform (FM w/ ILR).__ As you have sharply pointed out, teacher logits are invariant under scalar shifts, yielding identical softmax outputs. Thus, forcing the student to match teacher logits “exactly” may be overly strict. The ILR transform addresses this by mapping the probability simplex of dimension $D$ into a $D-1$-dimensional vector space, avoiding redundancy.

Overall, applying flow matching directly to pre-softmax logits produces better results than modeling it on post-softmax categorical probabilities. One possible explanation is that the softmax operation distorts the informative scale of the logits, negatively impacting ensemble distribution distillation. For example, unnormalized density information, calculated as the log-sum-exp of logits (Grathwohl et al., 2020), may be lost due to the softmax transformation.

> Will EDFM generalization improve by incorporating a bigger student model?
> … Perhaps a different design choice for the flow model can help as well.

During the rebuttal period, we conducted an ablation study on network architectures (MLP, Transformer, and U-Net) of varying scales (please see our comment to reviewer r2cZ for detailed results). The result demonstrates that the performance saturates with relatively small students, and the model used in the paper is already enough.

> I wonder how EDFM outperforms the teacher model by a large margin on text data.

This relates to two points: 1) our EDFM adopting a pre-trained teacher network, and 2) a slight decrease in accuracy for Multi-IVON compared to IVON. Specifically, for language tasks, EDFM is trained using the frozen, pre-trained IVON@mean model, which we found to outperform Multi-IVON in accuracy, consistent with Cong et al. (2024). As a result, our method improves upon the IVON@mean baseline by effectively distilling the Multi-IVON teacher (slightly in accuracy, and significantly in NLL, ECE), thereby surpassing the teacher by a wide margin in accuracy, similarly to the IVON@mean model.

Method|ARC-C|ARC-E|OBQA
-|-|-|-
IVON@mean|__0.710__ / 1.874 / 0.246|__0.888__ / 0.645 / 0.094|__0.809__ / 0.653 / 0.111
Multi-IVON|0.675 / __0.891__ / __0.091__|0.868 / __0.357__ / __0.026__|0.794 / __0.512__ / __0.032__
EDFM (Ours)|0.715 / 1.113 / 0.142|0.892 / 0.390 / 0.044|0.818 / 0.537 / 0.046

> Minor

We sincerely appreciate your thorough review (yes, $z_{t}^{x}$ is correct); we will make sure to adjust the mistakes pointed out in the camera-ready version.

--- Rebuttal Comment 1.1: Comment: Thank you for the answers. I believe the authors properly addressed my comments and those of the other reviewers; hence, I decided to raise my score to 4. --- Reply to Comment 1.1.1: Comment: We are pleased that our additional clarifications and ablation results effectively addressed the reviewers' concerns. We will incorporate them into the final manuscript in a clear and well-organized manner.
Thank you again for your insightful and supportive feedback!
Summary: This paper proposes a novel ensemble distribution distillation method (EDFM) that utilizes flow matching to efficiently transfer the diversity of ensembled teacher models to a smaller student model. Key challenges in ensemble distribution distillation are addressed, including the high computational cost of large integrations and the difficulty of capturing the full diversity of ensemble predictions due to the capacity constraints of student models. The proposed method introduces a lightweight network that learns to map individual model predictions to a vector field of integrated teacher prediction distributions, enabling fast and parallelizable inference. Extensive experiments demonstrate the effectiveness of EDFM compared to existing ensemble distribution distillation methods. Claims And Evidence: The claims made in the submission are supported by extensive experimental evidence, which is presented in a clear and detailed manner. The authors provide a comprehensive set of experiments across various tasks, including image classification and commonsense reasoning, to validate their proposed method, Ensemble Distribution Distillation via Flow Matching (EDFM). Here are some key points that support the claims: 1. The authors demonstrate that EDFM outperforms existing ensemble distillation methods in terms of accuracy, negative log-likelihood (NLL), and expected calibration error (ECE) across multiple datasets (CIFAR-10, CIFAR-100, ARC-C, ARC-E, OBQA). 2. The authors highlight the efficiency of EDFM in terms of runtime and computational cost. Experiment shows that EDFM scales well with the number of ensemble predictions, maintaining low execution times while improving performance. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem and application at hand. 
The authors address the challenge of ensemble distillation, which aims to transfer the knowledge from a computationally expensive ensemble of models to a single, more efficient student model. The proposed method, Ensemble Distribution Distillation via Flow Matching (EDFM), is designed to effectively capture the diversity of ensemble predictions and transfer this knowledge to the student model.
Theoretical Claims: The submission does not contain explicit theoretical proofs that require verification. Instead, it focuses on empirical validation through extensive experiments. The authors do make several theoretical claims and provide justifications for their approach. I think it would be better to add a theoretical proof to the article.
Experimental Designs Or Analyses: I reviewed the soundness and validity of the experimental designs and analyses presented in the submission. Overall, the experimental design is robust and well-structured, with appropriate benchmarks, metrics, and comparisons to validate the proposed method. However, the methods used in the comparisons are quite old.
Supplementary Material: No.
Relation To Broader Scientific Literature: The key contributions of the paper are deeply rooted in and extend the broader scientific literature in ensemble learning, knowledge distillation, generative modeling, uncertainty estimation, and scalability to large models. By introducing flow matching as a novel framework for ensemble distillation, the paper addresses several limitations of prior work, including the preservation of diversity, efficient sampling, and scalability to large models. The experimental results are also ok.
Essential References Not Discussed:
[1] Improved distribution matching distillation for fast image synthesis. Advances in Neural Information Processing Systems, 2024, 37: 47455-47487.
[2] Knowledge Distillation via Flow Matching.
[3] One-step diffusion with distribution matching distillation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024: 6613-6623.
Other Strengths And Weaknesses: This method presents some conceptual innovations, particularly in applying flow matching techniques to ensemble distillation tasks. However, flow matching itself is an existing technique, and the main contribution of this work lies more in integrating it with ensemble distillation rather than introducing theoretical or algorithmic breakthroughs. The authors should more clearly articulate their method's unique improvements over existing flow matching approaches (such as the design of conditional flow matching or efficient inference strategies) to better highlight its novelty. Furthermore, the Abstract and Methods are quite simple in writing. And all compared methods are very old.
Other Comments Or Suggestions: No.
Questions For Authors: No.
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and clear understanding of our work, particularly in terms of diversity, efficiency, and scalability. We are glad that you appreciated our extensive experimental results and recognized our approach as both a novel framework and a conceptual innovation. We hope our responses address any remaining concerns, and please let us know if you have any further questions.

> the methods used in the comparisons are quite old.
>
> the Abstract and Methods are quite simple in writing.

As baselines, we reviewed prior research on ensemble distribution distillation: EnDD from the seminal work of Malinin et al. (2020), which was later improved by Ryabinin et al. (2021), and the most recent FED by Penso et al. (2022). We would greatly appreciate it if you could share further related recent methods in ensemble distribution distillation. Additionally, we will make sure to expand the abstract and method sections in the final version.

> Essential References Not Discussed

Thank you for sharing the relevant prior works that we may have overlooked. Shao et al. (2024) enhance knowledge distillation using flow matching, and DiffKD (Huang et al., 2023) and KDiffusion (Yao et al., 2024) propose similar methods based on diffusion models. However, they are more similar to DBN (Kim et al., 2024) than to EDFM, as they align logits or features between teacher and student models one-to-one. Yin et al. (NeurIPS 2024, CVPR 2024) propose distribution matching distillation to distill diffusion models into one-step generators for faster sampling. As such, their goal and methodology differ from EDFM's. Nonetheless, they offer valuable insights in the general context of knowledge distillation. We will include these connections in the camera-ready version.

> The submission does not contain explicit theoretical proofs that require verification.
As you mentioned, the main contribution of our work lies in introducing flow matching as a novel framework for ensemble distribution distillation, demonstrated through extensive experiments. Since our approach aligns with the standard flow matching scenario, we believe the theoretical claims from the existing flow matching literature apply to our work as well, without significant challenges (e.g., minimizing the conditional flow matching loss in Eq. (13) ensures the approximation of the ensemble teacher's predictive distribution $p_{1}$). > unique improvements over existing flow matching approaches (such as the design of conditional flow matching or efficient inference strategies) We believe our work is significant not only for introducing a pioneering and promising direction in ensemble distribution distillation, but also for empirically validating its effectiveness through extensive experiments, as recognized by the reviewers. Also, it demonstrates strong practicality by being highly efficient in terms of inference cost with the proposed parallelized inference strategy. To further solidify the position of our work as a pioneering study, we provide key takeaways within the ensemble distribution distillation context. Comprehensive ablation studies on design choices in flow matching for effective ensemble distillation help clarify this aspect. In response, we conducted additional ablations, including those requested by the reviewers, which further strengthen the contribution of our work. __Sampling algorithm.__ We conducted an ablation study on the sampling algorithm, focusing on the ODE-solver, time step scheduler, and the number of steps. (With the exponential scheduler, more steps are populated near $t=1$.) The plot (https://imgur.com/a/fCd15ca) shows the results, where the filled marker denotes the settings used in the paper. 
__Noise distribution.__ We conducted an ablation study on the noise distribution during training, considering three distributions: (1) uniform, (2) exponential, and (3) beta. The choice of exponential and beta distributions was inspired by the well-known lesson that the model should learn more near the data. We found that the beta distribution achieved better results in terms of ECE.

Data|Distribution|ACC / NLL / ECE
-|-|-
C10|Uniform|__0.931__ / 0.219 / 0.011
C10|Exp. (Ours)|__0.931__ / __0.216__ / 0.009
C10|Beta|__0.931__ / __0.216__ / __0.008__
C100|Uniform|__0.761__ / 0.966 / 0.066
C100|Exp. (Ours)|__0.761__ / __0.932__ / 0.056
C100|Beta|0.760 / __0.932__ / __0.039__

__Type and formulation of flow matching loss.__ We ablated the flow matching loss by 1) comparing x-prediction and v-prediction (ours), and 2) comparing categorical probabilities and logits (ours). Please see our response to Reviewers AwX6 and r2cZ.

__Architecture.__ We ablated the network architectures (MLP, Transformer, and U-Net) at varying scales. We found that the current MLP architecture strikes the best cost-performance trade-off; please see our response to Reviewer r2cZ.
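To make the ablated design choices concrete, here is a hedged sketch of a single conditional-flow-matching objective evaluation for ensemble distribution distillation, in the spirit of the rebuttal's description (v-prediction on logits, beta-distributed noise times). All names (`cfm_loss`, `velocity_fn`, the zero-velocity toy student) are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch only: one conditional flow matching (CFM) loss
# evaluation.  x1 = teacher ensemble logits (the "data"), x0 = Gaussian
# noise, target velocity = x1 - x0 (v-prediction on logits).
import numpy as np

def cfm_loss(teacher_logits, velocity_fn, rng):
    b, c = teacher_logits.shape
    # Beta-distributed times concentrate training near t = 1 (the data),
    # matching the ablation's finding that beta sampling helps ECE.
    t = rng.beta(2.0, 1.0, size=(b, 1))
    x0 = rng.standard_normal((b, c))               # noise endpoint
    x_t = (1.0 - t) * x0 + t * teacher_logits      # linear interpolant
    v_target = teacher_logits - x0                 # conditional velocity
    v_pred = velocity_fn(x_t, t)
    return float(np.mean((v_pred - v_target) ** 2))

rng = np.random.default_rng(0)
teacher = rng.standard_normal((4, 10))             # fake ensemble logits
loss = cfm_loss(teacher, lambda x, t: np.zeros_like(x), rng)
```

A real student would parameterize `velocity_fn` with a small network conditioned on backbone features; sampling then integrates the learned ODE from noise toward a predictive distribution.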
Learning to (Learn at Test Time): RNNs with Expressive Hidden States
Accept (spotlight poster)
Summary: This paper casts the sequence modeling problem as a meta-learning problem at training time. The resulting model is a model which minimizes a loss, i.e. learns at test time. The authors show that Linear Attention and Attention are special instances in their Learning to Learn at Test Time Framework. Building on this framework the paper introduces new neural network layers called Test-Time Training (TTT) layers. The paper considers two instantiations of these layers, TTT-Linear and TTT-MLP, whose hidden state is a linear model and a two-layer MLP, respectively. In experiments with model sizes ranging from 125M to 1.3B parameters these novel layers match or exceed the performance of Transformer and Mamba baselines. ## Update after Rebuttal: The authors addressed all my questions and I still recommend acceptance of this paper. Claims And Evidence: The claims made by this paper are supported by convincing evidence. Methods And Evaluation Criteria: The choice of training the models and baselines on the Pile dataset is reasonable. There exist newer, cleaner open-source datasets such as SlimPajama or DCLM Baseline, but these would very likely not change the overall results. Training on two different context lengths 2k and 8k underlines the fair benchmark setup. For 2k context length the FLOP difference between Transformers (quadratic compute scaling with sequence length) and RNNs (linear scaling with sequence length) is smaller, while for 8k it is larger. Theoretical Claims: I checked the proofs for Theorem 1 and Theorem 2 for correctness. Experimental Designs Or Analyses: Their experimental design is sound and their ablations in Table 1 show careful model design. Supplementary Material: Not provided. Relation To Broader Scientific Literature: Even before publication/acceptance at a conference, this paper has inspired several other follow-up works.
Moreover, the paper unifies several other linear RNN architectures, such as Mamba, xLSTM (mLSTM), DeltaNet, and many more, within a single framework. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: - Extensive empirical validation of the models. - Extensive discussion of related work. - Their proposed model architectures show competitive performance. Weaknesses: - The update formulas of their new models (including the dimensions) could be stated explicitly. Other Comments Or Suggestions: - It would improve understandability if the final instantiation update rules for the Linear and the MLP variant were stated somewhere in the paper (including the layernorm and the residual connection). - You make the learning rate parameter learnable, so is the learning rate parameter a scalar? Did you think of different parametrizations? - If one views the learning rate as an inter-chunk forget gate, can we then explain the improved performance with smaller b in Figure 9 by more fine-grained (forget-)gating? Questions For Authors: 1) L. 156: Why is the naive layer already efficient in FLOPs? FLOPs have not been mentioned up to that point. Do the authors mean the non-quadratic scaling in sequence length compared to transformers? 2) L. 286: What does it mean that "all experiments for the TTT layers are performed in one training run"? Does that mean that there is one model trained with 32k context and then evaluated at different context lengths? If so, which FLOP budget is then plotted on the x-axis? The training FLOPs? Code Of Conduct: Affirmed. Overall Recommendation: 4
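As a reading aid for the update rules discussed above, the inner loop of a TTT-Linear-style layer can be sketched as online SGD on a toy self-supervised reconstruction loss. This is purely illustrative: key = value = token here, and the learnable projections, LayerNorm, and residual connection from the paper are omitted:

```python
# Toy sketch of test-time training: the hidden state is the weight
# matrix W of a linear inner model, updated by one gradient step per
# token on l(W; x) = ||W x - x||^2.  Not the paper's exact instantiation.
import numpy as np

def ttt_linear_scan(tokens, eta=0.1):
    seq_len, d = tokens.shape
    W = np.zeros((d, d))                     # hidden state = inner weights
    outputs = []
    for x in tokens:
        grad = 2.0 * np.outer(W @ x - x, x)  # gradient of ||W x - x||^2
        W = W - eta * grad                   # inner-loop SGD update
        outputs.append(W @ x)                # read out with updated state
    return np.stack(outputs), W

x = np.zeros(5); x[0] = 1.0
outs, W = ttt_linear_scan(np.stack([x, x, x]))
```

Feeding the same token repeatedly makes the reconstruction error shrink step by step, which is the sense in which the hidden state "learns at test time".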
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the impact of our framework. We also thank the reviewer for the concrete suggestions and questions, which we address below. ***Explicit update rule for the final instantiations*** Sorry we did not include these formulas in the main text due to space constraints. We chose to defer them to Appendix A (page 14 and 15), which includes a derivation and pseudocode of the update rules in dual form. We will expand Subsection 2.5 in the final version to include at least some of these details. ***Learning rate parameter*** The learning rate parameter \theta_{lr} is a vector of the same dimension as the embeddings x_t. We have tried making it a scalar and found performance to be slightly worse. We have not tried making it a matrix since that would significantly increase our number of parameters. ***More fine-grained gating*** We agree with the interpretation that \eta(x) is a gate (more precisely, an input gate). However, we believe that the trend in Figure 9 is not a consequence of \eta(x_t) being input dependent, since we have produced Figure 9 with a fixed \eta and the trend is the same. Intuitively, \eta(x_t) is used for every x_t regardless of the inner-loop mini-batch size b, so larger b will not result in more fine-grained gating. ***Answering the numbered questions*** 1 - Yes, sorry for the confusion. Perhaps a better phrase would be “efficient in computational complexity.” 2 - No, it does not mean what the reviewer has guessed. We are sorry for the confusion. Sometimes researchers would have multiple runs of the same experiment but with different random seeds to overcome instability, since some runs would get unlucky. We meant to say that we did not have multiple runs with different random seeds. We will delete this sentence in the final version to avoid possible confusion. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. Great work!
Summary: The paper proposes the Test-Time-Training (TTT) layer, with the goal of overcoming the limitations on the expressive power of modern RNNs. The idea consists in linking the hidden state of an RNN to the parameters of a layer, so that input-driven updates on the hidden state translate into updates of model parameters: by drawing a connection between these updates and gradient descent, the authors effectively manage to set up a training procedure for (a subset of) parameters in the model, which unfolds at test-time. As proven by the authors, the proposed TTT formulation encompasses and generalises the Attention mechanism. Hardware considerations are employed to speed up the application of the TTT layer, resulting in wall-clock times comparable to (or even better than) Mamba’s. A detailed analysis of the performance of the layer is reported, including numerous ablations, and scaling behaviour of perplexity of the model when varying context length and size. Claims And Evidence: For the most part, the evidence proposed is complete enough to allow for a proper assessment of the robustness of the method, and convincing in supporting its validity. The only claim I would reframe refers to the applicability of TTT to “any neural network as inner model”. While I believe this to be true in principle (as indeed the framework introduced is general enough), in my opinion it fails to address practical considerations which could actually hinder this, and I think it would be fair to better discuss and highlight this limitation in the paper. See also Q1. This is particularly relevant as the generality of the TTT framework is brought up multiple times as one of the main contributions of the paper, setting it apart from previous work (see eg end of Sec4.1 and 4.2). Methods And Evaluation Criteria: The main evaluations revolve around comparing perplexity of the proposed model against Mamba and Transformer baselines, after training on the Pile dataset on a fixed FLOPs budget.
The sizes of the models and baselines compared are kept similar. The evaluation is in my opinion fair and meaningful. The only improvement I would recommend consists in reporting performances for downstream tasks as well (using, e.g., the eval_harness framework), to further confirm that perplexity gains indeed transfer. Theoretical Claims: The main theoretical claims aim at drawing correspondences between the proposed method and linear / softmax attention, when specific choices on the TTT loss function and inner model are made. They also show the equivalence of primal and dual formulation of their method. The proofs mostly reduce to algebraic manipulations: I’ve checked their derivations and they appear correct. Experimental Designs Or Analyses: To my knowledge, the authors haven’t made their code available, so I can’t check it directly. Regarding the experiment setup, it is quite standard, so I don’t have any particular issue to highlight (see also Methods and Evaluation Criteria, and Q3). Supplementary Material: Yes, everything. Relation To Broader Scientific Literature: The project expands upon two main branches of work: modern RNNs, and Test-Time-Adaptation. Regarding modern RNNs, for its mini-batching procedure the paper reuses a similar chunk-splitting concept as in GLA (Yang et al, 23); however, in GLA this splitting is mostly a reorganisation of the operations to improve hardware efficiency, while in TTT it’s used to introduce an actual sequential update. Most important are the similarities to DeltaNet (Yang et al, 24): indeed, for specific choices of its inner model and loss function, TTT is exactly equivalent to DeltaNet. The authors however expand on this framework (with some caveats, see Q1). Regarding Test-Time-Adaptation, the method directly inherits the core idea of training a net during inference. In either case, in my opinion the discussion in Sec4 covers these connections reasonably well and fairly.
Essential References Not Discussed: The paper seems to reference the most relevant literature already. Other Strengths And Weaknesses: The paper is clearly written for the most part, with its core method being described in a concise but effective manner. The idea is interesting and original, although it does build upon other work on modern RNNs and Test-Time Learning. The validity of the method is properly corroborated by the results provided, without over-claiming. Other Comments Or Suggestions: Please change the running title to something more meaningful than “Submission and Formatting Instructions for ICML 2025” Questions For Authors: Main Questions
 1. *Flexibility wrt inner model / loss function* - 
While it’s true your model formulation is flexible enough to accommodate generic inner models and loss functions in theory, in practice wall-clock time remains a concern. As your layer aims to replace Attention or the Mamba mixer, it is reasonable to expect its application to require a comparable computational time. I really appreciated in this sense the comparison in Fig6, but the results therein already highlight how much just introducing the simplest nonlinearity (an MLP) impacts wall-clock time, due to the added complexity in both (inner) forward and backward computations. More complicated loss functions, or layers whose gradients are not readily computable, or that involve additional matrix-matrix multiplications, are all factors which will further exacerbate this. If you agree with this, it should be properly reported as a limitation of the method; if not, please elaborate. Minor 2. *On using a different loss* - Given the improvement that mini-batching provides to Linear Attention (i.e., TTT-linear with $b=T$, as per Thm1), I was looking forward to seeing a similar application to self-attention (leveraging Thm2). Also in light of my comment in Q1, I understand why you decided to stick with the simpler inner-loss definition, but it would be interesting to try (especially as I agree with your remark that the choice of inner loss would likely impact the performance of the method), and was wondering whether you had already started experimenting on it. 3. *Fig6* - Is Fig6 reporting average time per token at *training* or *inference*? The reason I’m asking is that, when processing input data at training, I was expecting to see a (slight) increase of wall-clock time of TTT with context length.
This is in light of the fact that, while computing gradients updates *within* a mini-batch is parallelisable (as you show and take advantage of), the TTT gradient updates *along* mini-batches is a sequential procedure (as you need to compute the various updated $W_{0}, W_{b}, W_{2b},…$ in order). Of course this doesn’t hold for generation, since no form of sequence length parallelisation is applied anyway. If Fig6 actually reports average inference time at generation, as I suspect, could you correct its title and also include average time for a forward pass during training (just to get an idea of how much more/less time consuming it is to train your model vs Mamba or Transformer). If not, what am I missing? 4. *Fig7* - 
Why would $l(W_0,x_t)$ increase over $t$ in Fig7? What causes the rise in $||x_t||$? Which Positional Encoding do you use? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful questions, which we answer below. ***1 - Flexibility w.r.t. inner model*** We agree. We will add the discussion below to the final version if the paper is accepted (ICML does not allow submitting a revision): In principle, any differentiable model and loss function can be instantiated as the inner-loop model and loss in our framework, but the practical effectiveness of such instantiations can be limited by their requirements in wall-clock time. While TTT-MLP is an effective instantiation in FLOPs, the additional complexity of the MLP structure makes the increase in wall-clock time much larger than the increase in FLOPs, as shown in Figure 6. It remains to be seen whether our framework can produce instantiations that overcome this limitation. ***2 - On using a different loss*** Yes, a different inner loss is among the most important things to try next. For the past few months, we have been experimenting with using next-token prediction as the inner loss and a Transformer as the inner model. Our initial results so far have been quite promising. ***3 - Figure 6*** Figure 6 is for training (forward and backward). Timing for inference is in Figure 13 in the Appendix, which has a separate plot for prefill (forward) and decode (generation). We appreciate that the reviewer is clear about the difference between training, prefill and decode. The reviewer said: *"I was expecting to see a (slight) increase of wall-clock time of TTT with context length."* A linear-complexity method that is completely parallelizable across the entire sequence should see a decrease in latency as context length increases, while one that is completely sequential should have constant latency. This is because latency = processing time / tokens processed. As tokens processed increases, the processing time for a parallelizable method increases at a much slower rate, while that for a sequential method increases at the same rate.
In Figure 6, none of the methods exhibits a significant decrease in latency, showing that given the compute resources in Figure 6, they are all far from parallelizable. ***4 - Figure 7*** We appreciate the reviewer’s attention to detail. We use RoPE, but we do not think the increase in l(x_t; W_0) has to do with the position embedding. As the reviewer already knows, the increase in l(x_t; W_0) is a consequence of the increase in \|x_t\|, whose reason is unclear to us. We have measured \|x_t\| in the Transformer baseline and observed a similar increase, so we know that it is not caused by using TTT layers. We will also change the running title; thanks for catching that. Regarding reproducibility, we had to delete the link to our online code repo in the manuscript for anonymity. Many people have been able to reproduce our results using the code online. We will add a link to the final version if the paper is accepted.
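The latency argument above can be checked with back-of-envelope arithmetic. The cost model below (fully parallel = constant wall-clock per forward; fully sequential = time linear in tokens) is an idealization assumed purely for illustration, not a measurement:

```python
# latency = processing time / tokens processed (as in the rebuttal).
# Idealized cost models with a made-up unit cost per token:
def parallel_time(tokens, per_token=1.0):
    return per_token             # all tokens processed at once

def sequential_time(tokens, per_token=1.0):
    return per_token * tokens    # one token after another

def latency(total_time, tokens):
    return total_time / tokens

ns = (8, 64, 512)
lat_parallel = [latency(parallel_time(n), n) for n in ns]
lat_sequential = [latency(sequential_time(n), n) for n in ns]
```

Under these idealizations, latency falls as 1/n for the fully parallel method and stays flat for the fully sequential one; flat curves at a fixed compute budget therefore indicate that none of the methods is near the fully parallel regime.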
Summary: This paper introduces Test-Time Training (TTT) layers, a 'clever' way to handle long sequences without the heavy cost of Transformers. The key idea is to treat the hidden state as a learning model that updates itself during inference, allowing it to capture complex patterns over long contexts. The authors propose two versions, TTT-Linear and TTT-MLP, which not only outperform existing models like Mamba but also run faster on modern hardware. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: The experimental design is limited but sufficient for a proof of concept. I think they could have included another task besides language modeling, e.g., time series. Supplementary Material: I checked briefly the implementation details Relation To Broader Scientific Literature: This paper builds on prior work in sequence modeling (especially Schlag et al., 2021 and Yang et al., 2024), while addressing their limitations in long-context settings. By leveraging self-supervised learning and meta-learning techniques, the proposed Test-Time Training (TTT) framework introduces adaptive hidden states which might be the way to improve both efficiency and expressiveness of SSMs Essential References Not Discussed: I think (Irie, Kazuki, et al. "A modern self-referential weight matrix that learns to modify itself." ICML, 2022.), which also combines ideas of FWP, RNNs, meta-learning through recursive self-improvement, is extremely relevant here and should be discussed in this paper, with key differences highlighted. Other Strengths And Weaknesses: Paper is very well-written and structured Weaknesses: - The main weakness of this work is the limited experimental setting and comparisons. Including results with relevant baselines, e.g., DeltaNet and Mamba2 would make the claims stronger and help us see how good these models are compared to these baselines. - The authors don't share any code for reproducing their results.
So it is hard to judge reproducibility. Other Comments Or Suggestions: None. Questions For Authors: 1- Why not include competitive results for Mamba2 and DeltaNet? 2- Can you comment on the similarities and differences compared to (Irie, Kazuki, et al. "A modern self-referential weight matrix that learns to modify itself." ICML, 2022.)? 3- Why did you use Mamba as the backbone for TTT-Linear and TTT-MLP? Why not more recent SSM models, such as Mamba2? How does the model perform with such backbones? I understand that (Mamba-->Mamba2) is an orthogonal improvement to the contribution proposed here, but would it be better to include the results with the top performant backbones to assess the full potential of TTT? 4- The claim in the first paragraph of Section 4.1, ''The hidden state in these models is a vector, similar to in LSTMs. For TTT-Linear or TTT-MLP, the hidden state is a matrix or two matrices, therefore larger'' is not valid for all SSMs. In fact, in recent SSMs such as FACTS (A Factored State-Space Framework For World Modelling), the hidden state is already a matrix. 5- When the hidden state of the backbone is already a matrix (e.g., FACTS), does using TTT still improve the expressiveness of the model or is this gain reduced? Would it affect the computational aspects of TTT? 6- Is it possible to share code to reproduce some parts of your results (e.g., on the Pile)? Otherwise, it is hard to assess the reproducibility of the approach's results. 7- One of the major issues of SSM models, and Mamba in particular (https://arxiv.org/pdf/2501.00658), is over-smoothing and sensitivity to noise, especially at the end of the sequence (see Table 1 in https://arxiv.org/pdf/2501.00658). I wonder how test-time training affects this behavior? Can the authors share their thoughts on it. Because if it is the case, this can mitigate one of the current major bottlenecks of SSMs. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our paper as a sufficient proof-of-concept. We also thank the reviewer for the insightful questions, which we answer below. ***1 - Comparison with Mamba 2*** Mamba 2 130M with 2k context trained with Chinchilla scaling law on the Pile has perplexity 10.77, which is significantly lower than our number of 11.09 reported in Table 1 of the manuscript. However, 11.09 was based on the older Mamba backbone. During the rebuttal period, we experimented with putting TTT-Linear into the Mamba 2 backbone. This model has perplexity 10.65, which is slightly better than Mamba 2. ***2 - Discussion on Irie et al. 2022*** We thank the reviewer for bringing up this very relevant work. We will add the discussion below to the final version if the paper is accepted (ICML does not allow submitting a revision): Irie et al. 2022 is similar to DeltaNet, except that the key, query and value vectors are also produced by the hidden state matrix. In the terminology of TTT, all outer-loop parameters are the initializations of inner-loop parameters. This approach naturally situates Irie et al. 2022 in the same setting as MAML (Finn et al., 2017), and their experiments are both on few-shot learning. ***3 - Results with Mamba 2 backbone*** Our answer to question 1 in this rebuttal already switches to the Mamba 2 backbone. As the reviewer expected, the improvement in backbone is indeed orthogonal to the improvement we had with TTT layers. Mamba 2 had not been released when we conducted most of the experiments in May, 2024. ***4 - Incorrect claim about related work*** Indeed, we have missed some of the recent developments in SSMs, making our claim incorrect. We appreciate the reviewer for carefully reading our text and pointing out this error. We will make sure to fix this in the final version if the paper is accepted. ***5 - Improvements on top of linear hidden states*** The hidden states of linear attention and Mamba 2 are already matrices.
Compared to linear attention, TTT-Linear brought significant improvements because of the added nonlinear operations. Compared to Mamba 2, TTT-Linear still brings a small improvement. ***6 - Sharing code for reproducibility*** Sorry we forgot to upload a zip file for ICML submission. We had to delete the link to our online code repo in the manuscript for anonymity. Many people have been able to reproduce our results using the code online. We will add a link to the final version if the paper is accepted. ***7 - Tradeoff between sensitivity to noise vs. over-smoothing*** We thank the reviewer for pointing out this interesting paper. More expressive hidden states will improve both ends of this tradeoff. As discussed in the paper, the key that controls this tradeoff is recency bias: too much recency bias results in sensitivity to noise, but too little results in over-smoothing. RNN layers with expressive hidden states will be able to remember useful information (therefore less over-smoothing) without using aggressive recency bias (therefore less sensitivity to noise), simply because it has a larger capacity for information. Take self-attention as an example. It does not need any recency bias because it simply stores all key-value pairs in a cache. We believe that very expressive inner models for TTT can achieve similar effects.
Summary: This paper presents a novel approach to sequence modeling by enhancing the hidden states of RNNs through a general method called Test-Time Training (TTT). The key idea is to frame the online updating of hidden states as a self-supervised learning process with a loss $\ell$ and the update rule $W_t = W_{t-1} - \eta \nabla \ell(W_{t-1}; x_t)$, where $W_t$ is the hidden state learned at each step, reminiscent of fast weights in the RNN literature, and $\nabla$ denotes gradients from the chosen objective function. This framework unifies several previous approaches, including naive RNNs, linear attention, and even self-attention, under a common paradigm. One main challenge of TTT is the difficulty of parallelization, as the proposed objective may introduce non-linearities and dependencies on previous steps. To address this, the authors propose a batch update approach, allowing $W_t$ to depend on the previous N steps, or in extreme cases, on $W_0$. Interestingly, the authors demonstrate that under certain conditions, TTT becomes equivalent to vanilla linear attention. Building on this foundation, the authors introduce two variants: TTT-linear and TTT-MLP, both incorporating layer normalization and mini-batch updating. The experimental results on approximately 1B parameter models show promising performance in both short and long-context scenarios, demonstrating the potential of TTT for sequence modeling tasks. Claims And Evidence: No. I really appreciate the TTT framework proposed in this paper. However, the experiments conducted are somewhat limited, making the validation of TTT less comprehensive and persuasive. I strongly recommend the authors include additional common benchmarks found in related works like Mamba or GLA, such as hellaswag, lambada_openai, and winogrande, beyond just measuring perplexity across context lengths.
Furthermore, more detailed comparisons with other relevant models like DeltaNet (which is heavily discussed in the paper) and Mamba2 would significantly strengthen the evaluation. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: I can see potential benefits in using TTT to develop more efficient memory management strategies for RNNs, guided by the principles of TTT. Essential References Not Discussed: Recent concurrent works should be included and discussed: 1. Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues 2. Gated Delta Networks: Improving Mamba2 with Delta Rule 3. Longhorn: State Space Models are Amortized Online Learners Other Strengths And Weaknesses: 1. The experimental section of this paper is insufficiently comprehensive, making the results less convincing. 2. The detailed implementations of TTT-MLP and TTT-Linear are not well-explained, which is quite different from the naive cases in Sec 2.5. Other Comments Or Suggestions: See below. Questions For Authors: I'm having trouble understanding how TTT-Linear and TTT-MLP work. Both methods include LayerNorm, which might create obstacles for parallelism and make them difficult to accelerate using Tensor Cores. Could the authors explain how the dual forms introduced in Section 2.5 can be implemented for TTT-Linear and TTT-MLP? Code Of Conduct: Affirmed. Overall Recommendation: 3
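The mini-batch update described in the summary (within a chunk of b tokens, all gradients are taken at the chunk's starting weights, so they parallelize, while chunks are processed sequentially) can be sketched as follows. The reconstruction loss and all names here are assumptions for illustration, not the paper's code:

```python
# Chunked ("mini-batch") TTT sketch: gradients of ||W x - x||^2 for all
# tokens in a chunk are evaluated at the same base weights, collapsing
# b sequential updates into one batched matrix multiplication.
import numpy as np

def ttt_minibatch(tokens, b=4, eta=0.1):
    d = tokens.shape[1]
    W = np.zeros((d, d))
    for start in range(0, len(tokens), b):
        chunk = tokens[start:start + b]       # (b, d), parallel inside
        residual = chunk @ W.T - chunk        # W x_i - x_i for each i
        grads = 2.0 * residual.T @ chunk      # summed outer products
        W = W - eta * grads / len(chunk)      # one update per chunk
    return W

tokens = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
W = ttt_minibatch(tokens, b=3)
```

In the extreme case where b equals the full sequence length, every gradient is taken at $W_0$, which is the regime the summary notes recovers vanilla linear attention under certain conditions.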
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing our framework’s potential to guide the development of RNNs with better memory. We also thank the reviewer for the concrete suggestions and questions, which we address below. ***Benchmark on downstream tasks beyond perplexity*** The focus of our paper is on long context. However, all the downstream tasks in Mamba (Gu and Dao, 2024), as the reviewer suggested, have sequence length <1000. In fact, most of them only need ~100 tokens. There also exist downstream tasks with long context, such as book summarization and solving software issues in large repositories. However, these tasks require post-training for instruction following capability, and pre-training frontier models with trillions of tokens. Unfortunately, the cost of this evaluation exceeds our budget. ***Comparison with Mamba 2*** Mamba 2 130M with 2k context trained with Chinchilla scaling law on the Pile has perplexity 10.77, which is significantly lower than our number of 11.09 reported in Table 1 of the manuscript. However, 11.09 was based on the older Mamba backbone. During the rebuttal period, we experimented with putting TTT-Linear into the Mamba 2 backbone. This model has perplexity 10.65, which is slightly better than Mamba 2. ***Concurrent work*** We appreciate these references highlighted by the reviewer. We will add the discussion below to the final version if the paper is accepted (ICML does not allow submitting a revision): Concurrent work such as [1] [2] [3] has further improved the update rule for RNNs with matrix hidden states. Given that TTT-Linear is already similar to DeltaNet and linear attention, it is also similar to these more recent variants. As discussed in the opening of the paper, our core contribution is a practical framework to use any neural network as the hidden state of an RNN. This contribution is still salient in comparison with concurrent work. ***Would LayerNorm create obstacles for parallelism? 
How to implement the dual form for TTT-Linear and TTT-MLP?*** Sorry we did not include these details in the main text due to space constraints. We chose to defer them to Appendix A (pages 14 and 15), which includes a derivation and pseudocode of the dual form for neural networks of arbitrary depth with arbitrary nonlinear operations. We will expand Subsection 2.5 in the final version to include at least some of these details. Following the explanation in Appendix A, it should be clear that LayerNorm (or any other nonlinear operation with few FLOPs) would not create an obstacle for parallelizing the rest of the hidden state. While LayerNorm runs on CUDA Cores, compute-intensive operations, specifically matrix multiplications, will still run on Tensor Cores. As an intuitive analogy, regular neural networks, such as Transformers, also have many nonlinear operations, but all of their linear operations can still be parallelized. Does the reviewer feel that we have adequately addressed their concerns? Please let us know if we can provide more information.
Automatic Differentiation of Optimization Algorithms with Time-Varying Updates
Accept (poster)
Summary: The paper studies the convergence of the gradient of algorithm iterates with respect to the hyperparameters in settings where optimization algorithms employ time-varying update rules, such as changing step sizes or momentum parameters. The authors provide convergence guarantees for the derivative of the iterates under more general assumptions on the update mapping. They adapt these results to proximal gradient descent with variable step sizes and its accelerated counterpart, FISTA, under partial smoothness assumptions. Claims And Evidence: Please refer to the "Other Strengths And Weaknesses" section. Methods And Evaluation Criteria: Please refer to the "Other Strengths And Weaknesses" section. Theoretical Claims: Please refer to the "Other Strengths And Weaknesses" section. Experimental Designs Or Analyses: Yes. Supplementary Material: No. Relation To Broader Scientific Literature: I'm not sure if there is a strong relation between this paper and the broader scientific literature. Please refer to the "Other Strengths And Weaknesses" section. Essential References Not Discussed: It would be more understandable if the authors added more details on the results of Beck (1994) that they claim to improve upon. Other Strengths And Weaknesses: Strengths: 1. It provides convergence guarantees for the gradient of the algorithm iterates, extending the analysis of Beck (1994). 2. The assumption on the update mapping for convergence guarantees of the derivative of the iterates is more general. Weaknesses: 1. Too many assumptions. For example: (a) The key assumption, Assumption 2.2, requires the derivative of the update mapping $A_k$ to converge while $x^k$ is also approaching $x^*$. It is hard to understand how this condition can be satisfied. The sufficient condition, given as the equicontinuity of $\mathcal{A}_k$, is not well defined, and its meaning is unclear. (b) The assumption regarding the relative interior is not supported by examples. 
Moreover, this assumption is difficult to satisfy. For instance, when $x \neq 0$, $\text{ri} (\partial \|x\|_1)$ is empty. Therefore, more convincing examples are needed to justify this assumption. (c) The analysis would be more convincing if concrete examples were provided, particularly in the partly smooth setting. This is especially important in Theorem 3.4 (iii), where the function is required to be $C^3$ on $\mathcal{M}$. 2. Unclear motivation. This work briefly mentions in the introduction (lines 65--67) that the effective use of automatic differentiation (AD) relies on the convergence of the derivative of the iterates. However, it would be more convincing if examples were provided to illustrate why the convergence of $Dx^k(u)$ is important. What do the authors mean by "effective use"? In bilevel optimization, for example, the convergence of $x^k(u)$ can be analyzed without considering the convergence of $Dx^k(u)$. Why, then, is it necessary to analyze the convergence of $Dx^k(u)$? 3. Difficult to read. Mathematical concepts are not well explained or supported by examples. The paper assumes familiarity with numerous optimization notations and concepts without providing definitions. For example, there is no definition of $\mathcal{L}(\mathcal{X},\mathcal{X})$, the relative interior mentioned in lines 237--238, or equicontinuity in line 166. Additionally, the meaning of "Lipschitz continuity in $x$ uniformly in $u$ and $k$" is unclear. Other Comments Or Suggestions: 1. Typo on line 264: if I understand correctly, it should state that the Hessian of $f$ is positive definite. 2. Typo on the 4th line of the first paragraph in Section 3.3: what is Algorithm 3.3? Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will fix minor typos and incorporate the suggested changes to the next version. > It would be more understandable if the authors added more details on the results of Beck (1994) that they claim to improve upon. In the appendix, we dedicate the entire Section A.2 to the results of Beck (1994) by presenting them in a way which makes the comparison between the two works easier. We could not do that in the main text due to the page limit. > (a) The key assumption, Assumption 2.2, requires the derivative ... Please see the answer to "Difficult to read. Mathematical concepts are not well explained..." below. > (b) The assumption regarding the related interior is not supported by examples ... The relative interior of a non-empty convex set is always non-empty (Rockafellar, 1997; Theorem 6.2). When $C$ is a singleton, $\mathrm {ri} \ C = C$. For practical non-smooth functions, this assumption (Assumption 3.3) is satisfied almost everywhere by a solution (Vaiter et al., 2017; Definition 3 & Theorem 3). > (c) The analysis would be more convincing if concrete examples were provided ... Please see our response "Assumptions 3.1 - 3.3 are quite less restrictive ..." to Reviewer QxMK for the question "I did not check the proof ..." > Unclear motivation. This work briefly mentions in the introduction ... When we apply AD on the algorithm for solving the lower level problem, we really are replacing the lower level problem with $$ x^{(K)} (u) := \mathcal B (x^{(0)}, u; K) \,, $$ where, for a given $u$, $\mathcal B (x^{(0)}, u; K)$ performs $K$ iterations of the algorithm defined by the update mapping sequence $\mathcal A_k$, starting with $x^{(0)}$. 
To show that, by doing this, we are actually solving the original bilevel optimization problem (which does not depend on the algorithm being used to solve the lower level problem), the convergence of $x^{(k)} (u)$ to the solution $\psi (u)$ (that is, the forward pass computes the correct solution) won't suffice. We also require the guarantee that $D x^{(k)} (u)$ converges to the derivative of the solution mapping $D \psi (u)$ (that is, the backward pass computes the correct derivative). > Difficult to read. Mathematical concepts are not ... Thank you for the suggestion. We will update our preliminaries and the notation section for a better understanding of our paper. The notion of equicontinuity for a collection of functions (Rudin 1987; Definition 11.27) means that such a collection behaves "uniformly" in terms of continuity. For example, when $f$ is $C^2$-smooth and $g=0$ in Equation $\mathcal P$ in the paper, the update mapping of gradient descent with variable step size $\mathcal A_k (x, u) := x - \alpha_k \nabla_{x} f (x, u)$ has an equicontinuous derivative $D \mathcal A_k$. $\mathcal L (\mathcal X, \mathcal X)$ is the space of all linear operators from $\mathcal X$ to $\mathcal X$. The relative interior of a convex set is defined in, for example, Rockafellar (1997; p. 44). "Lipschitz continuity in $x$ uniformly in $u$ and $k$" means that the given collection of maps is Lipschitz continuous in $x$ with a Lipschitz constant that is fixed for all $u$ and $k$. Formally, there exists $L \in \mathbb R$ such that, for all $u \in \Omega$, $k \in \mathbb N$, and $x, y \in \mathcal X$, $$ \lVert D \mathcal A_k (x, u) - D \mathcal A_k (y, u) \rVert \leq L \lVert x - y \rVert \,. $$ For the gradient descent example above, $D \mathcal A_k$ is Lipschitz continuous in $x$ uniformly in $u$ and $k$ when the step sizes $\alpha_k$ are bounded and the Hessian $\nabla^2_{x} f (\cdot, u)$ is $L$-Lipschitz continuous for all $u$. > Typo on line 264 ... Yes, that is a typo. It should be the Hessian of $f$ and not the gradient. 
Thank you for pointing it out. > Typo on the 4th line of the first paragraph ... It should be Algorithm 1. Thank you for pointing it out. **References** Rudin, W., 1987. Real and complex analysis. Rockafellar, R.T., 1997. Convex analysis. Vaiter, S., Deledalle, C., Fadili, J., Peyré, G. and Dossal, C., 2017. The degrees of freedom of partly smooth regularizers. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal and will keep my current score.
Summary: Automatic differentiation (AD) is a little-studied workhorse behind all major deep-learning frameworks (PyTorch, TensorFlow, JAX). The authors show linear convergence of forward-mode AD for algorithms where the iteration's parameters change over time. This is a very neat theoretical result. Claims And Evidence: The theory is solid and its discussion is ok, although the authors could perhaps explain the relationship (overlap) with https://www.repository.cam.ac.uk/items/271dad6b-c165-434e-a4ae-74cac4129604 and https://arxiv.org/abs/2208.03107 better. I would prefer to have a theorem or two fewer in the main body of the text, while presenting the clearly original material more prominently. Methods And Evaluation Criteria: The empirical results are rudimentary, but illustrative. Theoretical Claims: The introduction could spell out that they consider the Clarke subdifferential, rather than saying "We use the terminology of Rockafellar and Wets"? "Partly smooth" functions are a bit neglected in the literature. It would be great to relate the "partly smooth" functions to the better-known semismooth functions and functions definable in o-minimal structures. (Partly smooth functions are a subclass of semismooth functions, which are a subclass of functions definable in o-minimal structures.) The proofs are mostly ok. In Lemma C.1, the intermediate equation should be: $\big\langle b^{(i)}, w - y^{(i)}\big\rangle = -\frac{1}{\|a^{(i)}\|}\Big(\big\langle a^{(i)}, y^{(i)} - \operatorname{proj}_{C_{k_i}}(y^{(i)}) \big\rangle + \big\langle a^{(i)}, \operatorname{proj}_{C_{k_i}}(y^{(i)}) - w \big\rangle\Big) = -\frac{1}{\|a^{(i)}\|}\cdot\|a^{(i)}\|^2 + \frac{1}{\|a^{(i)}\|}\big\langle a^{(i)}, w - \operatorname{proj}_{C_{k_i}}(y^{(i)}) \big\rangle \leq 0$ (where the second term is nonpositive because of the projection property). The inequality $\le-\frac{1}{2}\|a^{(i)}\|$ is a mistake in the paper. In Theorem 2.7, the proof relies on Assumption 2.3. 
It might be beneficial to explicitly show how the linear convergence assumption is used in bounding $\|Dx^k(u^*)-X^*\|$. Also, it is unclear how the dependency on $C_1$ and $C_2$ is computed; explicit bounds should be provided. In Theorem 3.7, the assumption that $D_xA_{\alpha_k}(x^*,u^*)$ remains bounded over iterations is reasonable, but should be stated explicitly. Also, the result extends previous work naturally, but the proof would benefit from a more explicit rate bound rather than an asymptotic argument. Experimental Designs Or Analyses: The empirical results are rudimentary, but illustrative. Supplementary Material: There does not seem to be any supplementary material. Relation To Broader Scientific Literature: The discussion of implicit differentiation could be less dismissive, perhaps? For deep learning, it is rather elegant. Essential References Not Discussed: None. Other Strengths And Weaknesses: There are some minor typos, e.g. lines 142-143: one of "without" and "removed" should be dropped? Other Comments Or Suggestions: In Algorithm 1 (Proximal Gradient with Extrapolation), conditions under which the step sizes ensure contraction are not apparent. If step sizes $\alpha_k$ oscillate too much, the contraction property may not hold. In Theorem 3.6, equation (10) states that the inverse is assumed to exist, but it would be good to clarify the conditions under which this holds (i.e., ensuring positive definiteness). The description of the projection operators $\Pi(x)$ and $\Pi^{\perp}(x)$ from manifold-based optimization does not state whether these projections are orthogonal with respect to the Euclidean norm or not. In Assumption 3.3, the relative interior may be empty. Is there a condition under which the relative interior is non-empty? Also, checking whether $0\in\partial_xF(x^*,u^*)$ holds can be difficult, especially in high-dimensional problems. 
Would the method generalize to multi-level optimization (https://arxiv.org/abs/2207.02849, https://proceedings.neurips.cc/paper/2021/hash/3de568f8597b94bda53149c7d7f5958c-Abstract.html, https://doi.org/10.1007/s00186-024-00852-5)? Rather than the experiments presented, I would appreciate some discussion of the AD implemented in PyTorch etc. Questions For Authors: Rather than the experiments presented, I would appreciate some discussion of the AD implemented in PyTorch etc. Would this be possible, please? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will fix minor typos and incorporate the suggested changes to the next version. > ... the authors could ... Riis (2020) and Mehmood & Ochs (2022) better. The key difference presented in Section 1.2, Paragraph 4 (last on the page) is that "the sequences $\alpha_k$ and $\beta_k$ [must] converge" for their analysis and that "[they] do not explain the *linear* convergence of AD of APG". These are the two main points that we address in this paper. We will make it more intelligible in the updated version. > I would prefer to have ... material presented clearly original. We recall many results from other papers (Lemmas 3.2 and 3.9, and Theorems 3.1, 3.6, and 3.11) and split our main results in Section 3 into intermediate results (Theorems 3.4, 3.7, 3.10, and 3.12), for a better understanding. We can change Theorems 3.4, 3.6, 3.10, and 3.11 into lemmas. > The introduction could spell ... Rockafellar and Wets"? We will update our preliminaries and the notation section to make the paper more readable. Thank you for the suggestion. > ... It would be great to relate ... definable in o-minimal structures... Can you please share a reference which relates partly smooth functions to semismooth functions? > ... In Lemma C.1, intermediate ... mistake in the paper. That is a typo. We will fix that. Thank you so much. > In Theorem 2.7, the proof ... bounding $\lVert D x^{(k)} (u^*) - X^*\rVert$. Initially, we were going to put Theorem A.2 in the main part of the paper. That would have allowed us to motivate the proof of Theorems 2.5 and 2.7 in the main part. However, the space constraints did not allow us to go through with it. > In Theorem 3.7, ... $D_{x} A_{\alpha_k}(x^*, u^*)$ remains bounded ... stated explicitly. You are right. Under the conditions of Lemma 3.2, this follows naturally. 
However, without imposing those assumptions, we need to explicitly assume that $x^{(k)}$ converges linearly to $x^*$ and $D_{x} A_{\alpha_k}(x^*, u^*)$ eventually has a norm less than 1. > Explicit rate bounds for Theorems 2.7 and 3.7 Thanks a lot for the suggestion. We will provide the explicit error bounds with $C_1$ and $C_2$ clearly defined. > The discussion of Implicit differentiation could be ... Indeed, there is no doubt about the benefits and popularity of implicit differentiation in, for instance, the training of equilibrium models. However, because we focus on automatic differentiation, we chose to list the reasons why AD shines more than ID. We will revise our discussion on implicit differentiation. > In Algorithm 1 (Proximal Gradient with Extrapolation), conditions under ... The assumptions on the choice of the step size and extrapolation parameter in Algorithm 1 are made explicit in our lemmas and theorems by recalling the work of Liang et al. (2014; 2017) or other classical results wherever necessary. Algorithm 1 in itself is an abstract algorithm and is hence named "Proximal Gradient with Extrapolation", which only mentions the three important steps in the algorithm. > In Theorem 3.6, equation (10) ... (i.e., ensuring positive definiteness). In Theorem 3.6, because $\rho (D_{x} \mathcal A_{\alpha} (x^*, u^*) \Pi (x^*)) < 1$, we obtain $\rho (D_{x} \mathcal A_{\alpha} (x, u) \Pi (x)) < 1$ for all $u$ near $u^*$ and $x = \psi (u)$ from Theorem 3.5(i), which implies the invertibility of $I - D_{x}\mathcal A_{\alpha} (x, u) \Pi (x)$ from Lemma A.1. > The use of projection operators ... the Euclidean norm or not. The orthogonality is indeed with respect to the Euclidean inner product. The Riemannian metric is inherited from the natural embedding of $\mathcal M$ in $\mathcal X$. We will make it clear in the paper. > In Assumption 3.3, the relative interior ... in high dimensional problems. 
The relative interior of a non-empty convex set is always non-empty (Rockafellar, 1997; Theorem 6.2). We assume that the reviewer means $0\in \mathrm {ri} \ \partial_{x} F (x^*, u^*)$. For many practical partly smooth functions, this condition is satisfied almost everywhere by a solution (Vaiter et al. 2017; Definition 3 & Theorem 3). > Would the method generalize to multi-level optimization? If needed, our work can be put into use in multi-level optimization because we mainly analyze the problem at the lowest level. Bilevel optimization is only emphasized as an *application* of our results. > Rather than the experiments ... this be possible, please? We are sorry we could not comprehend the comment well. Should we provide the implementation details of our experiments in PyTorch? **References** Liang et al., 2014. Local linear convergence of forward--backward under partial smoothness. Liang et al., 2017. Activity identification and local linear convergence of forward--backward-type methods. Vaiter et al., 2017. The degrees of freedom of partly smooth regularizers.
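For concreteness, a standard worked example (our own addition, under the usual lasso assumptions) of how the relative-interior condition and partial smoothness play out for the $\ell_1$ norm:

```latex
% Worked example (our addition): take F(x, u) = f(x, u) + \lambda \|x\|_1
% with f(\cdot, u) smooth, a minimizer x^*, and support I = supp(x^*).
% The subdifferential of the l1 norm at x^* is
\[
  \partial \|x^*\|_1
  = \bigl\{ v : v_i = \operatorname{sign}(x^*_i) \text{ for } i \in I,\;
             |v_i| \le 1 \text{ for } i \notin I \bigr\},
\]
% and the l1 norm is partly smooth at x^* relative to the active manifold
\[
  \mathcal M = \{ x : x_i = 0 \text{ for all } i \notin I \}.
\]
% The condition 0 \in \mathrm{ri}\, \partial_x F(x^*, u^*) then reduces to
% the strict complementarity condition
\[
  \nabla_x f(x^*, u^*)_i = -\lambda \operatorname{sign}(x^*_i)
  \ \text{for } i \in I,
  \qquad
  |\nabla_x f(x^*, u^*)_i| < \lambda \ \text{for } i \notin I,
\]
% which, consistent with the citation of Vaiter et al. (2017) above, holds
% for almost every u in typical lasso-type problems.
```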
Summary: This paper studies automatic differentiation, which is widely used and fundamental in bilevel optimization. The focus of this work is on the case where the algorithm may have changing parameters at each iteration, such as step-sizes. Under this setting and some assumptions, they analyzed the convergence of the automatic differentiation scheme. Claims And Evidence: Yes. Methods And Evaluation Criteria: This is a theoretical paper and there is no method proposed. Theoretical Claims: I did not check the proof, but I have some concerns regarding the assumptions 3.1 - 3.3, which seem quite strong. Can the authors list some functions that satisfy these assumptions? Some discussions should be provided below these assumptions. For example, if the function F is strongly convex and smooth, will these assumptions hold? Experimental Designs Or Analyses: Experiment settings are simple, but it is fine for a theoretical paper. Supplementary Material: No Relation To Broader Scientific Literature: No Essential References Not Discussed: None found. Other Strengths And Weaknesses: This paper seems quite technical, and the results are not trivial. However, it is difficult for me to judge whether this paper made significant contributions. The main reason is the lack of discussions on Assumptions 3.1 - 3.3 (looks restrictive for me), which determines the applicability of their results. Other Comments Or Suggestions: More discussions on the assumptions. Questions For Authors: I can see that on page 2, the expression of $D x^{(k+1)}(u)$ includes three parts: $D_x A(x^{(k)}(u), u)$, $D x^{(k)}(u)$, and $D_u A(x^{(k)}(u), u)$. The convergence of $D x^{(k+1)}(u)$ requires all these three terms to converge. Which one is the most difficult part to prove the convergence? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will incorporate the suggested changes to the next version. > I did not check the proof, while I have some concerns regarding the assumptions 3.1 - 3.3, which seem quite strong. Can the authors list some functions that satisfy these assumptions? Some discussions should be provided below these assumptions. For example, if the function F is strongly convex and smooth, will these assumptions hold? **...** it is difficult for me to judge whether this paper made significant contributions. The main reason is the lack of discussions on Assumptions 3.1 - 3.3 (looks restrictive for me), which determines the applicability of their results. **...** More discussions on the assumptions. Assumptions 3.1 - 3.3 are quite less restrictive when we compare them to the smooth setting ($g=0$) where $F = f$ is $C^2$-smooth and strongly convex. Many practical regularizers which induce some form of sparsity satisfy Assumption 3.1. These include the $\ell_1$ norm, the $\ell_{2, 1}$ norm, and the nuclear norm. More examples can be found in, for instance, Vaiter et al. (2017; see Section 3). The obtained sparsity is such that the solution lies on a low-dimensional manifold as compared to $\mathrm{dim} \ \mathcal X$, thus making it easier for Assumption 3.2 to be satisfied. Even though it is possible that this assumption may not be satisfied, it is still far more general than the usual assumption of a strongly convex, $C^2$-smooth lower-level objective. Assumption 3.3 is satisfied almost everywhere by a solution for many practical regularizers (Vaiter et al. 2017; Definition 3 & Theorem 3). For linear convergence rates, in Theorems 3.7 and 3.12, we also assume local Lipschitz continuity of the second derivatives of $f$ and $g$ and $C^3$-smoothness of $\mathcal M$, which is also satisfied for the same problems listed above and more in Vaiter et al. (2017). 
In the updated version, we will add these concrete examples for a better understanding of our assumptions. > I can see that in page 2, the expression of $D x^{(k+1)} (u)$ includes three parts: $D_x \mathcal A (x^{(k)} (u), u)$, $D x^{(k)} (u)$, and $D_u \mathcal A (x^{(k)} (u), u)$. The convergence of $D x^{(k+1)} (u)$ requires all these three terms to converge. Which one is the most difficult part to prove the convergence? $D x^{(k)} (u)$ and $D x^{(k+1)} (u)$ are elements of the same sequence $(D x^{(k)} (u))_{k\in\mathbb N}$, which is generated recursively by Equation $\mathcal{DR}$ and whose convergence is desired. It is possible that even though the sequences $(D_{x} \mathcal{A} (x^{(k)} (u), u))_{k\in\mathbb{N}}$ and $(D_u \mathcal{A} (x^{(k)} (u), u))_{k\in\mathbb{N}}$ converge (which is straightforward when $\mathcal A$ is $C^1$), the sequence $(D x^{(k)} (u))_{k\in\mathbb N}$ does not converge. So in the end, it is $D x^{(k)} (u)$ whose convergence requires the most effort to show.
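To make the derivative recursion concrete, here is a toy numerical sketch (our own illustration, not code from the paper): for the quadratic lower-level objective $f(x, u) = \frac{1}{2}(x - u)^2$ we have $\psi(u) = u$ and $D\psi(u) = 1$, and both iterates converge even for a non-converging step-size sequence.

```python
# Toy sketch (our own illustration): forward-mode AD of gradient descent
# on f(x, u) = 0.5 * (x - u)^2 with an oscillating, non-converging
# step-size sequence. The update mapping is A_k(x, u) = x - alpha_k * (x - u),
# so D_x A_k = 1 - alpha_k, D_u A_k = alpha_k, and the recursion reads
# D x^(k+1) = D_x A_k * D x^(k) + D_u A_k.
def forward_mode_ad_gd(u, num_iters=200, x0=0.0):
    x, dx = x0, 0.0  # x^(0) and D x^(0)(u)
    for k in range(num_iters):
        alpha = 0.4 if k % 2 == 0 else 1.2  # oscillates, never converges
        x, dx = x - alpha * (x - u), (1.0 - alpha) * dx + alpha
    return x, dx

x, dx = forward_mode_ad_gd(u=3.0)
```

Since $|1 - \alpha_k| \le 0.6 < 1$ for every $k$, both $x^{(k)} \to \psi(u) = 3$ and $D x^{(k)} \to D\psi(u) = 1$ at a linear rate, even though the step-size sequence itself has no limit.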
Summary: When, for example, solving a bilevel optimization problem $\min_{u\in\mathcal{U}}l(\psi(u),u)$ by gradient descent, the derivative of the solution mapping $\psi(u)$, that is $D\psi(u)$, needs to be computed. While either implicit differentiation (ID), using the chain rule, or automatic differentiation (AD) can be used to compute $D\psi(u)$, the latter may be preferred due to its black-box implementation. This paper particularly focuses on analyzing AD. Regardless of the differentiation technique, one needs to iteratively update $x^{(k+1)}(u) = \mathcal{A}_k(x^{(k)}(u),u)$ to approximate $\psi(u)$. And, during this process, forward-mode AD additionally updates $Dx^{(k)}(u)$ in a certain way to approximate $D\psi(u)$. This paper's main contribution is to study forward-mode AD for time-varying iterative processes $\mathcal{A}_k$, such as gradient descent with line search, the accelerated gradient method, and quasi-Newton methods, in a unified way. (Beck, 1994) and other cited works have studied this setting, but they lack a (linear) convergence rate analysis, which is the main interest of this paper. Section 2 is a direct extension of (Beck, 1994), which assumed the convergence of $\mathcal{A}_k$. This paper proves derivative iterate convergence without assuming convergence of $\mathcal{A}_k$. Moreover, with the additional Assumption 2.1(ii), this paper was able to show a linear convergence rate. (Q: Was there a convergence rate analysis for the AD of a non-time-varying update? I was not able to figure that out in the paper.) Section 3 then applies the established result in Section 2 to (extrapolated) proximal gradient with variable step size for the class of partly smooth functions. The convergence of the derivative iterates was already shown in (Mehmood & Ochs, 2022) (which does not assume convergence of $\mathcal{A}_k$), so the rate analysis is new here. Claims And Evidence: All claims are theoretically proven. Methods And Evaluation Criteria: . 
Theoretical Claims: I was not able to check the correctness of the proofs. Experimental Designs Or Analyses: This paper presents experiments on three classical machine learning problems. The numerical results align with the theory and seem sufficient, especially given the paper's theoretical focus. Supplementary Material: I have only gone through some parts. Relation To Broader Scientific Literature: . Essential References Not Discussed: . Other Strengths And Weaknesses: . Other Comments Or Suggestions: - Unlike the introduction, the abstract lacks a clear motivation (which should build upon existing limitations), and a complete picture of the paper. Regarding the latter, presenting the necessity of applying automatic differentiation to a (time-varying) iterative process, such as in bilevel optimization, could be helpful. - Define $\mathcal{L}(\mathcal{X},\mathcal{X})$. - Contributions: Although the assumptions of (Beck, 1994) are given in the appendix and one can compare them with new assumptions, I suggest adding comments after the assumptions so that one can clearly identify contributions (i) and (ii) from (Beck, 1994) after they are introduced. - Contribution (ii): I think Assumptions 2.2 and 2.3 should be added here. Questions For Authors: - p3 left line 125: Does this mean that the analysis in (Riis, 2020) and (Mehmood & Ochs, 2022) that showed the convergence of the AD of APG cannot be applied to the AD of PGD with non-converging variable step size? If yes, then does Section 2 (especially Theorem 2.5) generalize the result in (Riis, 2020) and (Mehmood & Ochs, 2022), so that you can provide a guarantee for the AD of PGD with non-converging variable step size? I suggest explaining what is new in Section 2, in comparison to (Riis, 2020) and (Mehmood & Ochs, 2022) and particularly in terms of contribution (i). - p4 right line 215: Could you further explain why, without such an additional condition, the AD will not have a linear rate? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions. We will fix minor typos and incorporate the suggested changes to the next version. > Was there a convergence rate analysis for the AD of a non-time-varying update? I was not able to figure that out in the paper. We highlight them in Section 1.1. We mention these works right before introducing the time-varying algorithms in Section 1.2. > Unlike the introduction, the abstract lacks a clear motivation (that should build upon existing limitations), and a complete picture of the paper. Regarding the latter, presenting the necessity of applying automatic differentiation to a (time-varying) iterative process, such as in bilevel optimization, could be helpful. We will add the application of our work in bilevel optimization in the updated version of the abstract. Thank you for pointing it out. > "Define $\mathcal L (\mathcal X, \mathcal X)$." $\mathcal L (\mathcal X, \mathcal X)$ is the space of all linear operators from $\mathcal X$ to $\mathcal X$. We will update our preliminaries and the notation section to make the paper more readable. Thank you for the suggestion. > "Contributions: Although the assumptions of Beck (1994) are given in the appendix and one can compare them with new assumptions, I suggest adding comments after the assumptions so that one can clearly identify contributions (i) and (ii) from Beck (1994) after they are introduced." That will definitely make our contributions more explicit. Thank you for the recommendation. We will do it in the updated version. > Contribution (ii): I think Assumptions 2.2 and 2.3 should be added here. That's a typo actually. Thank you for spotting that. > p3 left line 125: Does this mean that the analysis in (Riis, 2020) and (Mehmood & Ochs, 2022) that showed the convergence of the AD of APG cannot be applied to the AD of PGD with non-converging variable step size? 
If yes, then does Section 2 (especially Theorem 2.5) generalize the result in (Riis, 2020) and (Mehmood & Ochs, 2022), so that you can provide a guarantee for the AD of PGD with non-converging variable step size? I suggest explaining what is new in Section 2, in comparison to (Riis, 2020) and (Mehmood & Ochs, 2022) and particularly in terms of contribution (i). Riis (2020) and Mehmood & Ochs (2022) do not consider a general time-varying update rule. They consider APG or FISTA with converging step size and extrapolation parameter sequences by employing the analysis of Beck (1994). Their results cannot be easily applied to PGD with a general non-converging step size sequence. Therefore, we provide an update on the results of Beck (1994) in Section 2 using Assumption 2.1(ii) and Assumption 2.2 and use it to provide stronger results than Riis (2020) and Mehmood & Ochs (2022) in Section 3. > p4 right line 215: Could you further explain why, without such an additional condition, the AD will not have a linear rate? Actually, the correct statement should be that without such an additional condition, it will be very challenging to *prove* that AD will have a linear rate.
How Contaminated Is Your Benchmark? Measuring Dataset Leakage in Large Language Models with Kernel Divergence
Accept (poster)
Summary: The paper proposes the Kernel Divergence Score to estimate data contamination in large language models. The method makes use of a kernel over layer embeddings to estimate the similarities between samples before and after finetuning. If these embeddings remain similar, the data is likely contaminated. The authors perform experiments on several benchmarks showcasing the effectiveness of their approach. Claims And Evidence: Overall, the claims made by the authors are supported by convincing evidence. However, several major concerns (see experimental designs) limit the conclusions that can be drawn from the numbers shown. Furthermore, I am particularly concerned with the very small gain compared to the Perplexity method. This baseline is obviously flawed, as lower perplexity can have any number of causes. Yet, it performs almost perfectly on the given benchmark and is only outperformed by 2% by the new method. This indicates very marginal gains over a very basic baseline and therefore does not warrant the additional complexity of the method. Furthermore, the high deviations reported in Table 9 (>16%) indicate that these differences are likely not even significant. Methods And Evaluation Criteria: See experimental designs. Theoretical Claims: I have one concern with the current requirements that are listed: Why do the authors use two requirements? Consistency follows from monotonicity (if one replaces the "if" with an "if and only if", which would be necessary for a good definition of monotonicity). Thus, Requirement 2 is redundant and also plays a much smaller role in the evaluation. Experimental Designs Or Analyses: - I am concerned with the main metric used for evaluation (Spearman/Pearson correlation). In practice, one cannot perform such an experiment: the authors should provide a clear cutoff at which a benchmark should be considered contaminated, or give an indication of which value of their score function corresponds to what level of contamination. 
Without such a correspondence between the scoring function and contamination, the test has no practical use and cannot be considered a contribution. Thus, I ask the authors to investigate the following aspects: - Do similar values of S for the different benchmarks correspond to the same level of contamination? If not, then no such correspondence exists. - Do similar values of S for different models correspond to the same level of contamination? If not, then no such correspondence exists. - I am furthermore concerned that the test does not measure contamination, but difficulty. Using an argument similar to the authors', one could argue that training on more difficult samples, which the model is less used to, would cause more deviation on those samples. The fact that the perplexity metric (another metric for which this can be argued) is so close in performance to KDS indicates as well that this might be a problem. Therefore, I urge the authors to: - On the benchmarks presented in Appendix E, show the correlation between benchmark difficulty and model score for each model separately. I suspect (but am not sure) the correlation will be significantly above 0. - The authors should clearly argue why this is not the case. If one wants to use this test in practical scenarios to find contamination and accuse model providers of contamination, there should be no argument about what the test is measuring. - I am concerned that the shown plots indicate a discrepancy between seen and unseen samples. Specifically, on the left-hand side of each plot, one would expect the square corresponding to seen samples to be as bright as unseen samples: brightness indicates similarity between the samples. It seems here that the unseen samples are very similar while the seen samples are not. 
The authors should clarify whether this is a mistaken interpretation on my side (if so, explain), and if not, see how this can affect their scores, as it greatly decreases the difficulty for their method to detect unseen samples.
- I am concerned with the negative correlations shown for some baselines. The authors only briefly comment on this, but perfect negative correlation (see Zlib) for a benchmark indicates that the method would work perfectly if you were to swap the sign for this benchmark. This either indicates (1) the authors have found an issue no prior work has ever been able to find, (2) the implementation is wrong, or (3) the metric used is not a good metric for performance. The authors are strongly encouraged to provide evidence that excludes the second and third options, since the first one is quite a strong claim.
- The authors should compare against baselines in Table 5. Due to the cited papers (Duan et al., Das et al., ...), the numbers presented in Table 1 are a bit meaningless due to the possibility of a temporal split.
- The comparison with Oren et al. is not accurate. SRCT can only be used on the canonical ordering of samples. Since the authors subsample to report the numbers, the canonical ordering disappears. The authors should ensure that the subsampling happens in such a way that consecutive samples are included for the comparison against Oren et al.

Supplementary Material: I have reviewed all supplementary material. I have the following comments:
- No code is provided. The authors should provide code with clear instructions for reproduction purposes.

Relation To Broader Scientific Literature: The authors do a good job of relating the work to existing work in data contamination. No essential work is missing, and the baselines compared against are accurate. The first paragraph in Section 6 is necessary and appreciated (albeit not entirely complete, see Experimental Designs).
Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper is very well-written and easy to follow. ### update after rebuttal I have changed my view on this paper after the rebuttal. Most of my concerns were sufficiently addressed, although some concerns still remain (practical applicability of the method being the main bottleneck). Therefore, I have raised my score to a recommended accept, but would not recommend the paper for a spotlight paper. Other Comments Or Suggestions: I am currently leaning towards rejection. If the authors are able to address my concerns, I am willing to increase my score. However, if one of the following aspects is not clarified or improved during the rebuttal, I will further reduce my score: - Very marginal gains compared to the perplexity method. - Correspondence between S and level of contamination. - Is difficulty correlated with KDS? Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive feedback. Below, we address the concerns in detail.

---

_A1. Baseline comparison in Table 5 (Pile dataset)_

We provide the comparison below. KDS achieves the highest average correlation.

| Spearman Corr. | Wikipedia | PhilPapers | Enron | HackerNews | Pile_CC | StackExchange | **Average** |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Zlib | 0.861 | **1.000** | **1.000** | -0.956 | -0.782 | 0.990 | 0.352 |
| Zlib + FSD | **1.000** | 0.991 | 0.999 | 0.323 | 0.894 | 0.999 | 0.868 |
| Perplexity | -0.886 | 0.999 | 0.999 | -0.999 | -0.251 | 0.999 | 0.144 |
| Perplexity + FSD | **1.000** | 0.990 | 0.999 | 0.118 | **0.908** | **1.000** | 0.836 |
| Min-K% | -0.645 | 0.996 | **1.000** | -0.955 | 0.690 | 0.999 | 0.348 |
| Min-K% + FSD | 0.997 | 0.952 | 0.997 | 0.421 | **0.908** | **1.000** | 0.879 |
| Min-K%++ | -0.482 | 0.960 | -0.842 | 0.561 | 0.514 | 0.697 | 0.235 |
| Min-K%++ + FSD | -0.536 | 0.994 | -0.770 | 0.705 | -0.358 | 0.210 | 0.041 |
| KDS (Ours) | 0.891 | 0.982 | **1.000** | **0.897** | 0.895 | **1.000** | **0.944** |

_A2. Significance over Perplexity_

In the A1 table above, **KDS outperforms Perplexity on the Pile dataset, with an average correlation of 0.944 compared to 0.144**.

_A3. Negative correlations_

We resonate with the reviewer’s concern. To address the potential causes, **we reviewed our implementation and found no errors**. Since our code directly builds on established prior work, an implementation error is unlikely (code will be released).

---

_A4. Main metrics and Practical utility_

(1) Main metrics

We agree that absolute contamination estimation and exact correspondence would be valuable. However, a **necessary step toward this goal** is to first validate the correctness of any scoring functions.
The use of Spearman/Pearson correlation in our controlled experiments allows us to rigorously assess whether KDS (and other approaches) produces consistent and monotonic metrics, necessary conditions for any meaningful scoring function. **This provides a development-purpose metric to ensure the score behaves predictably and reliably in principle, which is essential for deploying it in practical settings**. To our knowledge, no prior work has investigated this practically important angle. Concerningly, we discovered that existing scores do not satisfy these necessary conditions. In contrast, KDS consistently satisfies these conditions under varying benchmarks (including the challenging ones), providing stronger foundations for future method design.

(2) Practical utility

The lack of access to pre-training corpora makes ground-truth estimation fundamentally challenging. Moreover, pre-training data is often non-overlapping across models, further complicating any cross-model normalization or comparison. _To our knowledge, there is no existing work that has effectively established a correspondence between the scoring function and dataset contamination_. **This reflects the intrinsic difficulty of the problem, rather than a unique shortcoming of our approach**. That said, we would like to emphasize that **KDS is highly practical, which supports real-world applications such as safety auditing and benchmark selection**, where knowing which dataset is more contaminated is often more important. It enables dataset creators or auditors to prioritize and select the least contaminated benchmarks for evaluation purposes. We state the practical utility in **Lines 108-117**.

---

_A5. Correlation between benchmark difficulty and KDS_

Thank you for this insightful question. During the rebuttal, we computed the correlation of {Llama3.1-8b, Mistral-7b, Qwen2.5-7b}'s performance on {GSM, MPP, MPL, TFQA} with KDS. The correlation values are 0.2, 0.4, 0.2, respectively, which are all very low.
This indicates that **benchmark difficulty is not what determines the scale of KDS.**

---

_A6. Why do we need Requirement 2?_

While the two requirements could be combined, separating them into orthogonal objectives offers clearer guidance for future research by isolating distinct performance aspects.

---

_A7. Clarification on Figure 2 (left)_

It is true that unseen samples are similar to each other in Figure 2 (left), and we would like to clarify that this is the similarity _before_ fine-tuning. However, **our method relies on the changes in the similarity or distance before and after fine-tuning, which is much more significant for unseen examples**. As shown in Figure 2 (middle, without gating) and Figure 2 (right, with gating), unseen samples exhibit significantly larger shifts in embedding geometry, whereas seen samples remain relatively stable. This differential behavior is the core signal that KDS captures. P.S. The performance of only using Figure 2 (left) is in Table 2, "w/o (2) Fine-tuning".

---

_A8. Is the comparison with SRCT accurate?_

**We preserved the canonical ordering of samples in SRCT** by not shuffling the dataset.

---

Rebuttal Comment 1.1: Comment: Thank you for the thorough reply. There are a couple of concerns left that I would like to see resolved before updating my review. Please see all details below.
- A1. Baseline comparison in Table 5 (Pile dataset): Thank you for this comparison. I believe this provides much stronger evidence for your method.
- A2. Significance over Perplexity: Your point is accepted.
- A3. Negative correlations: While your point is acknowledged, I still find this extremely concerning (probably not for this paper, but more for the field in general). This is an incredibly concerning observation, and I would encourage the authors to investigate it more thoroughly: what is causing the extremely negative correlations? Is there any way to tell based on the benchmark when a method will perform worse than random?
- A4: I do agree that most papers do not provide such a correspondence, but there are a few: SRCT and [1]. Furthermore, the fact that other papers do not provide such a correspondence does not mean that follow-up work should not strive to obtain it. Especially since it is impossible to use your method without such a correspondence in actual use cases. Furthermore, "where knowing which dataset is more contaminated is often more important" also requires the scores to match across benchmarks, which the authors have not shown. Indeed, suppose I obtain a score of 0.5 on MMLU and 0.8 on ARC-Challenge. Due to the non-existence of a proper correspondence, there is no way of saying that ARC-Challenge is actually more contaminated than MMLU (and whether either is contaminated at all). Therefore, the authors have failed to address this point and I strongly question the practical effect this paper has.
- A5: Thank you for computing this. Is there any reason why you did not do this for all models? It seems like a pretty simple computation. Furthermore, I would definitely not say that 0.2, 0.4, 0.2 is very low, but it is also not very high. My concern remains, especially since the authors have not provided a reason why my argument for correlation with difficulty is incorrect or mistaken.
- A6: I personally disagree, as I think it would simplify the presentation if you had only one requirement. However, I acknowledge that this is a matter of personal opinion and will therefore not further push the point.
- A7: Yes, the authors confirm my exact concern: if the distribution of seen samples is much more concentrated than the unseen samples, would the problem of contamination detection not become easier? Ideally, the distribution of both seen and unseen samples before training is the same. Otherwise, training on one seen sample might also increase your score on another because they are so similar.
- A8: Great, this resolves my concern.
[1] https://arxiv.org/abs/2405.16281

---

Reply to Comment 1.1.1: Comment: Thank you for the thorough read of our rebuttal, and for probing with insightful questions. We address the remaining ones below.

---

_A3. Reason for negative correlation_

We fully agree with the reviewer's view. Accordingly, we have investigated this in more depth. As a case study, we examined the Zlib baseline on the HackerNews benchmark, with a correlation of -0.956. We hypothesize that these negative correlations may stem from the following factors:
- Many baseline methods operate at the instance level and then aggregate scores across a dataset. We observed that the overall score distribution appears to be highly similar between small vs. large contamination rates $\lambda$, making the aggregated score relatively insensitive to gradual contamination. In such cases, even a few outliers can disproportionately influence the overall score and introduce non-monotonic behavior.
- Upon closer examination, we found that unseen examples can contain a large number of code snippets with predictable formatting, repetitive syntax and tokens, which makes them highly compressible. As a result, even at a low contamination rate $\lambda=0.1$, the Zlib score can be slightly higher than at $\lambda=0.9$ (compounded with the outlier effect).

We believe our results can inspire follow-up works to carefully re-examine widely used contamination detection methods, which may be worth an extensive study on its own. We thank the reviewer again for the comment. We will make this discussion more prominent in the revised version.

---

_A4. Score and contamination rate_

We report below the scores across different benchmarks and different contamination rates. While a perfect correspondence between score and contamination rate remains difficult to establish, we observe that the overall score trends are consistently monotonic across all three benchmarks.
| **Contamination Rate** | **WikiMIA** | **Arxivtection** | **BookMIA** |
|:---:|:---:|:---:|:---:|
| 0.10 | 0.110 | 0.172 | 0.158 |
| 0.20 | 0.228 | 0.300 | 0.302 |
| 0.30 | 0.382 | 0.437 | 0.425 |
| 0.40 | 0.441 | 0.496 | 0.511 |
| 0.50 | 0.487 | 0.517 | 0.619 |
| 0.60 | 0.597 | 0.638 | 0.722 |
| 0.70 | 0.722 | 0.768 | 0.763 |
| 0.80 | 0.809 | 0.863 | 0.802 |
| 0.90 | 0.920 | 0.918 | 0.832 |

---

_A5. Correlation between benchmark difficulty and KDS_

We should have used the wording "relatively low" in our previous response. As suggested, we also computed the correlation for more models, and observed consistent findings (e.g. Llama3.2-1b and Qwen2.5-1.5b having weak correlation). We couldn't afford to run this many inferences (number of benchmarks $\times$ models) in such a short time window, given that our limited compute forces us to serialize the inference runs.

We would like to clarify that difficulty vs. contamination are distinct notions. A benchmark’s difficulty reflects how challenging its questions or text are for human (or model) reasoning. In contrast, contamination reflects whether the exact or near-verbatim content of that benchmark was present in the model’s pretraining corpus.

> "_Using a similar argument as the authors, one could argue that training on more difficult samples the model is less used to, would cause more deviation on the sample._"

This does not necessarily hold. **A difficult sample may still cause minimal deviation if the model has already seen similar content during pretraining**. Sometimes, complex or niche material—such as advanced math problems or scientific text—can appear on public educational sites, making it possible for a model to memorize them during training. In such cases, despite the sample’s difficulty, the model is already well-prepared to handle it, and fine-tuning will make only minor adjustments.
Conversely, even a simple sample can cause a large deviation if it is truly novel to the model—e.g., a common-sense sentence phrased in an unfamiliar way or from a novel domain. What matters is not whether the sample is inherently hard, but whether it introduces a new learning signal. Our method captures this signal by measuring the representation shift induced by fine-tuning, which we argue is a more direct proxy for novelty (and hence contamination) than for difficulty.

---

_A7. Similarity plot_

We now understand your concern. Yes -- detection would become easier when one distribution becomes more concentrated. However, **we would like to clarify that Figure 2 was constructed primarily for clear illustration purposes**. We intentionally chose a non-i.i.d. configuration to make the effect of fine-tuning on unseen examples more visually prominent, particularly for demonstrating how our method captures representation shifts for those samples. We would like to emphasize that **all of our quantitative results and benchmark evaluations in the main paper are based on the standard i.i.d. setup**, where seen and unseen samples are drawn from the same distribution. This ensures that contamination detection is evaluated under more realistic conditions.
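As a closing note to this thread, the monotonicity claimed in the A4 table above can be checked mechanically: with strictly increasing scores, the Spearman correlation between contamination rate and score is 1. A self-contained sketch in plain Python (rank correlation computed by hand; the rate and score values are copied from the WikiMIA column of that table):

```python
def ranks(v):
    # Rank values 1..n (the data here has no ties).
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    # Pearson correlation computed on the ranks of x and y.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

rates = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90]
wikimia = [0.110, 0.228, 0.382, 0.441, 0.487, 0.597, 0.722, 0.809, 0.920]
rho = spearman(rates, wikimia)  # 1.0 up to float rounding: perfect rank agreement
```

Running the same check on the Arxivtection or BookMIA columns also yields 1.0, since those reported scores are likewise strictly increasing in the rate.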
Summary: This paper proposes the Kernel Divergence Score (KDS) as a measure of dataset-level benchmark contamination for LLMs. KDS is computed as the weighted average of pairwise embedding vector changes induced by fine-tuning the model under investigation on the benchmark test dataset. KDS is shown to be robust to many design choices, to exhibit a high degree of 'monotonicity', i.e., correlation of contamination degree and score value, and consistency, i.e., similar contamination induces similar scores.

Claims And Evidence: Key claims:
* The proposed KDS has high(er than baselines) correlation with contamination degree => substantiated with Table 1, although Min-K% performs better on 2/3 benchmarks without requiring weight access
* The proposed KDS has high consistency, i.e., the same contamination level leads to similar scores => Moderately well substantiated in Figure 3. Standard deviation is very large compared to the score value at low contamination rates. I.e., while scores are not statistically significantly different for the same contamination rates, they will also not be statistically significantly different for moderate differences in contamination rate.

Methods And Evaluation Criteria:
* The considered datasets are a reasonable choice, which could be extended with experiments investigating contamination introduced in a fine-tuning step, but this is purely optional.
* The proposed evaluation is suitable to establish the utility of KDS as a metric to rank datasets based on their contamination
* However, no effort is made to show that it permits the comparison of models (note this is not claimed as an objective)
* Similarly, no effort is made to show that it permits determining whether or not a specific model is contaminated with a specific dataset (note this is also not claimed as an objective)
* A range of ablation studies is conducted, showing the (lack of) sensitivity of KDS to many design choices

Theoretical Claims: Equivalence between (3) and (4) for $\gamma=1$ is the only claim and it is correct.

Experimental Designs Or Analyses:
* The experimental design is sound and consistent with the claims being made.
* Note that some claims which I believe would be important for practical relevance (model comparison and establishing absolute instead of relative contamination) are not made and not analyzed.

Supplementary Material: I have skimmed the supplementary material.

Relation To Broader Scientific Literature: Given the objectives of the proposed method, a reasonable set of baselines is compared to, which are carefully discussed in the appendix. KDS shows strong performance across all three considered benchmarks. However, Min-K% performs better on 2/3 benchmarks without requiring open weights / a fine-tuning step, with the difference in performance most likely being practically irrelevant (similar or worse performance of their own method in Tables 5 and 6 is considered "near-perfect").

Essential References Not Discussed: An important work on quantifying dataset-level contamination (Dekoninck et al., "ConStat: Performance-based contamination detection in large language models." NeurIPS'24) is neither referenced nor compared to.

Other Strengths And Weaknesses:
### Strengths
* LLM evaluation is a topic of crucial importance and benchmark contamination is an important factor.
* KDS is conceptually relatively simple (with an even simpler version performing just as well (differences in Euclidean distance -- no soft-gating))
* KDS shows strong monotonicity across a broad range of settings and models

### Weaknesses
* The need to fine-tune the model under investigation makes the proposed approach applicable only to open-weight models
* KDS only permits a ranking of datasets by their contamination for a given model, with no comparison between models or absolute statements about the contamination of a model, severely limiting its practical utility.
* The performance improvement over the more broadly applicable Min-K% seems practically irrelevant.

Other Comments Or Suggestions:
* The use of LoRA should be mentioned in the main text
* Bolding only the average in Table 1 is misleading and hides the fact that Min-K% outperforms KDS in 2/3 datasets.
* Figure 2 (middle) mostly shows that seen-seen pairs change less. Can you create a histogram of values for seen-seen, seen-unseen, unseen-unseen (without taking the absolute value, preferably)?

Questions For Authors:
* What is the minimal contamination rate required to show it is >0 with a confidence of 95% and 99%? (This should be computable using the results visualised in Figure 3)
* How do scores correlate with contamination across different models?
* Can you conduct an experiment, fine-tuning with rephrased "seen" data, simulating a setting where the model was trained on rephrased benchmark data?
* Why take the absolute value of the change in embedding similarity? The provided intuition indicates that similarities should increase for previously unseen data.
* Can you conduct an experiment showing the effectiveness of your approach detecting contamination introduced during fine-tuning? (Gradient flow only from answers and data mixed with some background data.)

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. Below, we address your concerns in detail.

---

_A1. Related work_

We thank the reviewer for pointing out the work by Dekoninck et al. [1], which appears to be highly relevant! It was an oversight on our part at the time of submission, and we will make sure to discuss it in the updated version.

[1] Dekoninck et al., "ConStat: Performance-based contamination detection in LLMs." NeurIPS 2024.

---

_A2. Open weight model_

We acknowledge that KDS currently requires open-weight models. This is a fair point, and we will clearly note this limitation in the paper. However, we'd like to emphasize that focusing on open-weight models is valuable for academic research because they provide transparency **essential for developing a deeper understanding of how contamination manifests inside LLMs**. In future work, adapter-based probing techniques can be explored to extend its principles to proprietary models.

---

_A3. Significance w.r.t. Min-K%_

While Min-K% works well on easy benchmarks such as WikiMIA and BookMIA, its performance degrades significantly on more challenging datasets. For example, **in the three hardest PILE subsets in Table 5, Min-K% has significantly lower performance than KDS**, as shown below (numbers are Spearman correlation):

| PILE | Min-K% | KDS |
|-|-|-|
| Wikipedia | -0.645 | **0.891** |
| Hackernews | -0.955 | **0.897** |
| Pile-CC | 0.690 | **0.895** |

---

_A4. Practical utility_

We agree that absolute contamination estimation is valuable. However, **a necessary step toward this goal** is to ensure that any proposed contamination scoring function satisfies core correctness criteria—namely, monotonicity and consistency with ground-truth contamination levels in controlled settings. **These conditions ensure that the score behaves predictably and reliably in principle, which is essential before deploying it in practical settings**.
To our knowledge, no prior work has investigated this practically important angle. Concerningly, we discovered existing scores do not satisfy these necessary conditions. In contrast, KDS consistently satisfies these conditions under varying benchmarks (including the challenging ones), providing stronger foundations for future method design.

Lastly, we would like to emphasize that KDS is also highly practical, because it supports real-world applications such as safety auditing and benchmark selection, where knowing which dataset is more contaminated is often more important. **To further demonstrate the practical utility, we include an in-the-wild evaluation across 11 public benchmarks (Appendix E)**. For these reasons, we believe KDS offers a meaningful and practical step forward for the field.

---

_A5. Minimal contamination rate to be regarded as "significantly contaminated"_

The required rate to be considered "significantly contaminated" is **0.10 (p=0.014)** under significance level $\alpha=0.05$, and **0.15 (p=0.004)** under $\alpha=0.01$.

---

_A6. Histogram on the values of embedding similarity change_

Since plots cannot be updated or shown at this stage, we provide the histogram as a frequency table below. The table is retrieved from a contamination rate of 0.5.

| bins | Seen-Seen | Seen-Unseen | Unseen-Unseen |
|:-:|:-:|:-:|:-:|
| x < -0.10 | 0 | 202 | 22 |
| -0.10 <= x < 0.00 | 21529 | 44112 | 6030 |
| 0.00 <= x < 0.10 | 40953 | 80338 | 52270 |
| 0.10 <= x | 18 | 348 | 4178 |
| **total** | 62500 | 125000 | 62500 |

_A7. Embedding similarities should increase for unseen data. Why take the absolute change?_

We would like to clarify that the embedding distances increase (i.e., embedding similarities decrease) for previously unseen data after fine-tuning. This is also evidenced in the table above, where $\log \frac{\Phi(Z)}{\Phi(Z')} = ||Z'_i-Z'_j||^2_2 - ||Z_i-Z_j||^2_2$ is mostly positive for unseen-unseen pairs.
We take the absolute value of the change to ensure that we capture the magnitude of deviation, regardless of direction. This makes the score robust to mixed behavior and focuses on detecting how much the geometry shifts. We will clarify this in the final version. --- _A8. Effect of Fine-tuning and Rephrased Data_ Following the suggestion, we fine-tuned "unseen" samples from WikiMIA and Arxivtection, observing KDS increases of 22% for WikiMIA, and 11% for Arxivtection. This shows our scoring approach's ability to effectively capture contamination introduced by fine-tuning. Additionally, we experimented on WikiMIA by mixing unseen data with rephrased "seen" data at varying proportions, yielding a Spearman correlation of -0.791. This outcome is expected, as rephrasing likely blurs the distinction between seen and unseen samples. --- _A9. Other suggestions_ Thank you for all the constructive suggestions! We will incorporate those comments, to (1) mention the use of LoRA in the main paper, (2) bold-face all the best performances in Table 1, and (3) add a histogram based on our response, A6. --- Rebuttal Comment 1.1: Comment: I want to thank the Authors for their detailed response, which has clarified most of my questions. I have adjusted my score accordingly. I would still be curious about the cross-model correlation of KDS with contamination, even if it turns out to be low, and would like to encourage the authors to mention the susceptibility of their approach to simple rephrasing evasion. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for adjusting the score. We observe that correlations across models are generally high—for instance, (bloomz, phi3-small) = 0.99, (phi3-small, qwen2.5-14b) = 0.98, and (llama3.1-8b, qwen2.5-7b) = 0.86. We will also ensure that the final version includes a discussion on rephrasing-based evasion. Thanks again for your support!
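The A6 histogram and the A7 formula above can be illustrated with a small toy example: for an RBF kernel with $\gamma=1$, the per-pair log-ratio reduces to the squared-distance shift, whose magnitude can be compared for seen-seen versus unseen-unseen pairs. A numpy sketch with synthetic embeddings (the seen/unseen split and the shift scales are assumptions for illustration, not the authors' data or implementation):

```python
import numpy as np

def sq_dists(Z):
    """Pairwise squared Euclidean distances between the rows of Z."""
    sq = np.sum(Z * Z, axis=1)
    return np.clip(sq[:, None] + sq[None, :] - 2 * Z @ Z.T, 0.0, None)

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 4))                      # toy embeddings before fine-tuning
Z_ft = Z.copy()
Z_ft[:4] += rng.normal(scale=0.01, size=(4, 4))  # "seen" rows: barely move
Z_ft[4:] += rng.normal(scale=1.00, size=(4, 4))  # "unseen" rows: large shift

# For k(z_i, z_j) = exp(-||z_i - z_j||^2), log k/k' equals this distance shift.
shift = sq_dists(Z_ft) - sq_dists(Z)

iu = np.triu_indices(4, k=1)
seen = np.abs(shift[:4, :4][iu]).mean()
unseen = np.abs(shift[4:, 4:][iu]).mean()
assert unseen > seen  # unseen-unseen pairs drift far more: the signal KDS uses
```

With these scales, the unseen-unseen magnitudes dominate by orders of magnitude, mirroring the skew toward positive values in the authors' frequency table.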
Summary: The paper introduces the Kernel Divergence Score (KDS), a method to quantify dataset contamination in LLMs by measuring changes in kernel similarity matrices of sample embeddings before and after fine-tuning. Claims And Evidence: The central claim—that KDS effectively quantifies contamination—is supported by experiments where contamination is simulated by mixing pre-training "seen" data with "unseen" data and fine-tuning. However, a critical limitation is that contamination in real LLMs occurs during pre-training, not via post-hoc fine-tuning. The experiments validate KDS in a synthetic setup but lack evidence that it generalizes to pre-training leakage. The correlation results are convincing for the tested scenarios, but the claim’s broader applicability remains unproven. Methods And Evaluation Criteria: The kernel-based approach is novel, leveraging embedding shifts post-fine-tuning. However, evaluating contamination via fine-tuning conflates two distinct processes: pre-training data leakage and adaptation during fine-tuning. The benchmarks (e.g., WikiMIA) are standard, but their use of fine-tuning to inject contamination may not reflect real-world pre-training leakage. The evaluation criteria (monotonicity, consistency) are appropriate but limited to the authors’ experimental setup. Theoretical Claims: The paper lacks formal theoretical guarantees for KDS. The intuition that fine-tuning impacts unseen data more is plausible but not rigorously proven. The mathematical formulation (Eq. 3–5) is empirically validated but lacks theoretical analysis of why kernel divergence should correlate with contamination in general settings. Experimental Designs Or Analyses: The controlled experiments systematically vary contamination ratios, but the contamination is artificially introduced via fine-tuning, not pre-training. This design choice raises concerns about ecological validity. 
For instance, real contamination involves memorization during pre-training, which may not manifest the same embedding shifts as fine-tuning. The ablation studies (e.g., kernel function choices) are thorough but do not address this core issue. Supplementary Material: The appendix includes implementation details, ablation studies, and extended evaluations, which are good. Relation To Broader Scientific Literature: The work connects to membership inference attacks (MIA) and dataset contamination detection (e.g., Carlini et al., 2021; Shi et al., 2023). However, prior MIA methods focus on individual sample detection, while KDS operates at the dataset level. Essential References Not Discussed: None. Other Strengths And Weaknesses: The methodology of this paper is interesting, but the experimental part does not yet convince me. Other Comments Or Suggestions: None. Questions For Authors: 1. How does KDS perform when contamination originates from pre-training data rather than fine-tuning? If experiments only use fine-tuning to simulate contamination, the method’s applicability to real-world LLM evaluation (where contamination occurs during pre-training) is unclear. Demonstrating KDS on pre-training leakage would strengthen the claims. 2. Can KDS distinguish between memorization during pre-training vs. adaptation during fine-tuning? The current setup assumes these are equivalent, but they may involve different mechanisms. A response clarifying this would affect the validity of the evaluation. 3. Have the authors tested KDS on LLMs where the pre-training corpus is partially known? This could validate whether KDS correlates with ground-truth contamination in real-world settings. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. Below, we address the key concerns:

---

_A1. Clarification on problem setting_

We absolutely agree with your viewpoint that contamination in real LLMs occurs during pre-training! We'd like to clarify that **our central goal is indeed to quantify contamination originating from pre-training, not fine-tuning**. Our methodology and evaluation are designed around pre-training leakage. In particular, we ask this question (Sec 2.1):

> Given an LLM $\mathcal{M}$ and a benchmark dataset $\mathcal{D}$ (e.g., WikiMIA), to what extent has this dataset been exposed during pre-training?

To address this, our key idea is the following:

> If the model $\mathcal{M}$ has already been pre-trained on $\mathcal{D}$, then if we further fine-tune $\mathcal{M}$ on $\mathcal{D}$, the model would have minimal changes in embeddings due to prior exposure during pre-training (and vice versa).

Hence, we use a lightweight fine-tuning step as a probe to reveal how much of the evaluation dataset was likely seen during pre-training. In other words, **we are not using fine-tuning to inject contamination, but only as a mechanism to probe whether $\mathcal{D}$ was used in pre-training**. This design choice aligns with recent work (e.g., Zhang et al. 2025, FSD) that also leverages fine-tuning to surface memorized content—but our approach uniquely leverages embedding-level structural changes, enabling a more holistic dataset-level assessment.

---

_A2. "the contamination is artificially introduced via fine-tuning, not pre-training"_

We would like to clarify that **our experimental setup simulates contamination w.r.t. pre-training, which is precisely the phenomenon our method aims to quantify**. Specifically, our controlled experiments leverage datasets such as WikiMIA, with two subsets: seen/unseen in the pre-training corpus of the LLM (e.g., Mistral-7B).
By mixing these pre-training seen and unseen subsets at varying proportions, we simulate datasets with known and controllable contamination ratios relative to pre-training—not fine-tuning.

---

_A3: "Have the authors tested KDS on LLMs with partially known pre-training corpora?"_

Excellent suggestion! We further evaluated on **Pythia-6.9b**, which is known to be pre-trained on the PILE dataset. The Spearman correlation coefficient is **0.999** on the Enron subset. Furthermore, our evaluation includes WikiMIA, BookMIA, and ArxivTection benchmarks, which label samples that are likely to have been seen in general LLM pre-training as "seen". We explicitly simulate varying contamination ratios and show in **Table 1** that **KDS exhibits near-perfect monotonicity and consistency, outperforming all baselines across three datasets.**

---

_A4. Theory_

While our paper primarily focuses on providing a practical and robust empirical method for quantifying pre-training contamination, we agree that theoretical analysis is an important direction. Our core intuition—that fine-tuning shifts the embeddings of unseen pre-training data more significantly than seen data—is empirically grounded, as shown in Figure 2, and supported by strong monotonicity and consistency across all benchmarks (Tables 1, 5, 6). Moreover, the mathematical formulation in Eq. (3–5) provides a principled and interpretable construction of KDS, which integrates:

- A soft gating function that emphasizes initially similar pairs, and
- A distance shift measure capturing how much pairwise embedding geometry is altered after fine-tuning.

That said, we acknowledge that a full theoretical analysis of why kernel divergence correlates with contamination remains nontrivial. Such analysis would require carefully defined assumptions about the structure of the embedding space, the dynamics of fine-tuning, and the underlying data distribution.
We consider this a promising direction and plan to explore these theoretical foundations in future work.

---

Rebuttal Comment 1.1:

Comment: Thanks for the author's response, which addressed some of my concerns.
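To make the construction concrete, here is a minimal numerical sketch of a kernel-divergence-style score. The RBF kernel, the gating choice, and the normalization below are illustrative assumptions, not the paper's exact Eq. (3–5):

```python
import numpy as np

def rbf_kernel(X, bandwidth=1.0):
    """Pairwise RBF kernel matrix for row-vector embeddings X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * bandwidth**2))

def kernel_divergence_score(E_before, E_after, bandwidth=1.0):
    """Toy dataset-level score: gate each pair by its pre-finetuning
    similarity, then accumulate how much that kernel entry shifted."""
    K0 = rbf_kernel(E_before, bandwidth)
    K1 = rbf_kernel(E_after, bandwidth)
    gate = K0  # soft gating: emphasize initially similar pairs
    return float(np.sum(gate * np.abs(K1 - K0)) / K0.size)

rng = np.random.default_rng(0)
E = rng.normal(size=(32, 8))                 # embeddings before fine-tuning
seen = kernel_divergence_score(E, E + 0.01 * rng.normal(size=E.shape))
unseen = kernel_divergence_score(E, E + 1.0 * rng.normal(size=E.shape))
# a small post-finetuning shift ("seen" data) yields a smaller score
# than a large shift ("unseen" data)
```

Under this toy model, the score is zero when fine-tuning leaves embeddings untouched and grows with the embedding shift, which is the direction of the intuition argued in A4.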
Summary: This paper investigates how to quantify dataset leakage in large language models. The proposed method is inspired by the fact that fine-tuning affects the embedding relationships involving unseen samples more significantly than those involving seen samples. The authors propose the Kernel Divergence Score, which compares the kernel similarity matrices of the embeddings before and after fine-tuning to compute a score measuring the degree of dataset leakage. The proposed method achieves the best performance against the compared baselines. Besides, the authors conduct comprehensive ablation studies to investigate factors like kernel bandwidth and embedding location.

Claims And Evidence: Yes, the claims are well supported by the main experiments, ablation studies, and visualizations.

Methods And Evaluation Criteria: Yes, the proposed methods make sense.

Theoretical Claims: There are not many theoretical claims.

Experimental Designs Or Analyses: Yes, the experimental designs make sense and are consistent with previous work.

Supplementary Material: There is no supplementary material.

Relation To Broader Scientific Literature: Yes, the proposed method is inspired by findings from previous studies.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: See above.

Other Comments Or Suggestions: None

Questions For Authors: None

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Dear Reviewer J5f8, We sincerely appreciate your positive feedback and the time you've dedicated to reviewing our manuscript. Your insights are invaluable to us. Please let us know if you have any further questions.

---

Rebuttal Comment 1.1:

Comment: More details. This paper's idea is inspired by the fact that fine-tuning affects the embedding relationships involving unseen samples more significantly than those involving seen samples. As far as I know, I think using this fact to quantify dataset leakage is a clever approach. (Some contamination detection methods are too naive)
1. The experimental results show the effectiveness of the proposed method.
2. The authors conduct extensive ablation studies, which make me feel this work is solid.
3. The writing is easy to understand.
4. These are reasons why I give a positive score.

Best regards, Reviewer j5f8
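The controlled-contamination protocol described in A2 (mixing pre-training-seen and unseen subsets at known ratios and checking that the score tracks the ratio) can be mimicked with a toy simulation; all shift magnitudes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def dataset_shift_score(contamination_ratio, n=1000):
    """Mean per-sample embedding shift for a dataset mixing 'seen'
    samples (small shift after fine-tuning, since they were already
    memorized in pre-training) with 'unseen' samples (large shift)."""
    n_seen = int(contamination_ratio * n)
    shifts = np.concatenate([
        np.abs(rng.normal(0.1, 0.01, n_seen)),     # seen: small shift
        np.abs(rng.normal(1.0, 0.05, n - n_seen)), # unseen: large shift
    ])
    return float(shifts.mean())

ratios = [0.0, 0.25, 0.5, 0.75, 1.0]
scores = [dataset_shift_score(r) for r in ratios]
# the dataset-level score decreases monotonically as the seen
# (contaminated) fraction grows, which is the behavior Table 1 reports
```

This is only a caricature of the real experiment, but it shows why a monotone score-versus-ratio curve (and hence a high Spearman correlation) is the natural success criterion for this setup.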
Mixture of Lookup Experts
Accept (oral)
Summary: This paper introduces a new LLM architecture, MoLoE. MoLoE enables the conversion of experts into lookup tables before inference, which does not require computation and is placed outside of VRAM, thereby reducing its VRAM usage to levels comparable to dense models. At the same time, MoLoE only needs to transfer the lookup results into VRAM during inference, without the need to transfer any model parameters, making its extra latency for data transfer negligible. This addresses the issues of high VRAM usage in MoE models and the significant latency caused by expert offloading. Experiments on a range of language understanding datasets demonstrate that MoLoE can achieve performance comparable to, or even better than, MoE models.

## Update After Rebuttal

The authors' response has adequately addressed my key concerns, particularly regarding the long-context optimization strategies and MoE's inefficiency in single-GPU deployment. I am keeping my rating.

Claims And Evidence: The paper claims that MoLoE achieves lower latency and better performance compared to MoE, with the same VRAM usage and number of activated parameters. These claims are supported by experimental results.

Methods And Evaluation Criteria: This paper essentially reduces VRAM usage and inference latency at the cost of increased storage overhead outside of VRAM. For scenarios with limited VRAM and sensitivity to latency (such as edge-side large language models), MoLoE holds practical value.

Theoretical Claims: This paper is application-oriented and does not involve theoretical proofs. The explanatory formulas and parameter/FLOP calculations presented in the paper are correct.

Experimental Designs Or Analyses: The paper evaluates the model on language understanding tasks and compares it with dense and MoE models. The comparison is conducted under the control of the same number of activated parameters and VRAM usage, making it convincing.
Supplementary Material: The supplementary material contains only pseudocode and hyperparameter settings.

Relation To Broader Scientific Literature: Since this paper aims to address the issue of large parameter sizes in MoE, it seems to align with the goal of expert pruning.

Essential References Not Discussed: As mentioned above, due to the similar objectives, expert pruning methods should be discussed, such as [1].

[1] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models, ACL 2024.

Other Strengths And Weaknesses:

**Strengths:**
1. The motivation is clear, and the method makes sense. This paper introduces a novel approach for reducing the VRAM usage of MoE.
2. The paper demonstrates the effectiveness of MoLoE through comprehensive experiments.

**Weaknesses:**
1. In certain settings, MoLoE's offloaded parameters are significantly more than those of MoE. For example, in the setting with 160M activated parameters, the offloaded parameters of MoLoE-16E are more than seven times those of MoE. As the model scales up, the usability of MoLoE may come into question.
2. Theoretically, when the prompt length is sufficiently long, the lookup results that MoLoE needs to load during pre-filling could surpass the size of MoE's experts. For example, in a model with 1B activated parameters, the MoE-10E expert parameter count is 2.7B (assuming all experts are loaded for each layer and then deleted after computation). On the other hand, MoLoE-4E loads 0.26M parameters per token. This means that when the prompt length exceeds 10k, MoLoE-4E would need to load more parameters than MoE-10E.

Other Comments Or Suggestions: See above.

Questions For Authors:
1. In Figure 3, why is MoE significantly slower than the dense model, even when ignoring the loading time, despite both having the same number of activated parameters?
2. Since the experts in MoLoE can be reparameterized as LUTs, the specific architecture of the experts during training should not affect the architecture during inference. Is it possible to use modules other than FFN as experts?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
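The crossover estimate in weakness 2 above can be checked with a line of arithmetic; the figures are taken directly from the review:

```python
# MoE-10E loads its expert parameters once per forward pass, while
# MoLoE-4E loads lookup results per prompt token (numbers from the review).
moe_expert_params = 2.7e9       # 2.7B expert params loaded by MoE-10E
moloe_load_per_token = 0.26e6   # 0.26M params loaded per token by MoLoE-4E

crossover_tokens = moe_expert_params / moloe_load_per_token
# beyond roughly this prompt length (~10.4k tokens), MoLoE-4E loads
# more parameter data during pre-filling than MoE-10E
```

This confirms the review's "exceeds 10k" figure.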
Rebuttal 1:

Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below.

> Q1. As the model scales up, the usability of MoLoE may come into question.

Firstly, the number of LUT parameters that need to be offloaded by our method is $dN|V|$, whereas MoE needs to offload the expert size of $2dND_r$. Since $|V|$ is typically on the order of tens of thousands, the difference between the two is not greater than an order of magnitude. Additionally, the method is designed for edge deployment of LLMs, where the parameter size is constrained by the available computational resources, typically resulting in smaller models. Therefore, the size of the LUTs stored on lower-level storage devices is generally acceptable.

> Q2. When the prompt length exceeds 10k, MoLoE-4E would need to load more parameters than MoE-10E

In the extreme case of very long texts, the amount of parameters our method loads during the pre-filling phase may indeed be greater than that of MoE. However, during LLM inference, the time required for multi-step decoding remains the dominant factor, meaning the cost during the prefilling phase can be somewhat offset. Additionally, directly loading the expert parameters before reparameterization during the pre-filling phase and using computation to replace loading is another potential solution.

> Q3. Why is MoE significantly slower than the dense model?

Our implementation is based on the Mixtral code from the Huggingface transformers package. Unlike cluster environments that can utilize expert parallelism, edge-side single-card deployment requires each activated expert to process tokens individually, resulting in lower parallelism. When the batch size is greater than 1, the parallelism is further reduced due to different tokens requiring different experts, leading to slower speeds.
It is worth noting that the inefficiency of MoE computation could potentially be addressed in the future through friendlier hardware designs or better operator implementations. However, even if the single-card execution efficiency of MoE becomes comparable to that of dense models, our method still maintains a significant advantage in terms of the latency associated with loading parameters.

> Q4. Is it possible to use modules other than FFN as experts?

In fact, as long as the module does not involve information exchange between tokens, i.e., the module's input is context-independent, it can serve as an expert in MoLoE. This characteristic is similar to the FFN layer in transformers. Therefore, we adopt FFN as the expert, ensuring that the type of expert is consistent with the MoE model.
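The reparameterization at the heart of this exchange can be sketched in a few lines: because a routed expert only ever sees context-independent embedding rows, its output can be precomputed for every token ID before inference. The dimensions below are toy assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d, d_hidden = 100, 16, 32             # toy sizes for illustration

emb = rng.normal(size=(vocab, d))            # embedding table
W1 = 0.1 * rng.normal(size=(d, d_hidden))    # a routed expert's FFN weights
W2 = 0.1 * rng.normal(size=(d_hidden, d))

def expert(x):
    """A simple ReLU FFN expert; its input is an embedding row."""
    return np.maximum(x @ W1, 0.0) @ W2

# Reparameterize: precompute the expert's output for every token ID
# once, before inference. The expert is now a lookup table.
lut = expert(emb)                            # shape (vocab, d)

token_id = 42
# at inference, a single row lookup replaces the FFN computation;
# only d values cross into VRAM, not the 2*d*d_hidden expert weights
assert np.allclose(lut[token_id], expert(emb[token_id]))
```

This also makes the storage trade-off visible: the LUT holds `vocab * d` values versus `2 * d * d_hidden` for the expert weights, matching the rebuttal's $dN|V|$ vs. $2dND_r$ comparison at the level of one expert.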
Summary: This paper presents MoLoE, a new MoE architecture designed to address the high VRAM usage of traditional MoE models. MoLoE uses the output of the Embedding layer as input for the experts during training. Before inference, it pre-computes the output of each expert for each token ID, reparameterizing the experts as LUTs. By offloading the LUTs to other storage devices, MoLoE reduces the memory overhead of the experts while introducing negligible data transfer latency, and achieves performance superior to MoE with the same number of activated parameters.

## update after rebuttal

Thanks for addressing my questions; I keep my rating as accept.

Claims And Evidence: The paper primarily makes the following claims:
1. MoE faces deployment challenges, including high VRAM usage or significant offload latency.
2. MoLoE can reparameterize the experts during the inference stage, resulting in lower VRAM usage and inference latency.
3. MoLoE achieves performance comparable to or better than MoE.

These claims are either theoretically valid or can be empirically demonstrated through experiments.

Methods And Evaluation Criteria: The proposed method reduces data transfer by several orders of magnitude compared to MoE, without sacrificing performance, while maintaining the same number of activated parameters and VRAM usage. This aligns with the objectives of the paper. However, a side effect of this approach is the need to maintain large LUTs in external storage devices, which may limit the applicability of the model in certain scenarios.

Theoretical Claims: The paper does not introduce particularly complex techniques, and its effectiveness is primarily demonstrated through experimental results. The ablation study in Table 7 effectively highlights the validity of the model design.

Experimental Designs Or Analyses: The experiments are conducted on 8 tasks, with dense and MoE models used as baselines for comparison.
The metrics include accuracy, number of activated parameters, number of parameters during training, offloaded parameters, and amount of data transferred. The experiments demonstrate the validity and effectiveness of the proposed method.

Supplementary Material: The paper does not provide separate supplementary material.

Relation To Broader Scientific Literature: The method seems to be related to reparameterizable network architectures in computer vision, as both approaches simplify the model by reparameterizing after training. For example:
+ RepVGG: Making VGG-style ConvNets Great Again

Essential References Not Discussed: Although they can be considered concurrent work, the authors are encouraged to include the latest papers from ICLR 2025 in the related work section.
+ MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts
+ TC-MoE: Augmenting Mixture of Experts with Ternary Expert Choice

Other Strengths And Weaknesses:

Strengths:
+ MoLoE addresses the challenges of MoE deployment -- high VRAM requirements and the large delays associated with offloading experts -- offering practical value.
+ The experiments demonstrate that the model achieves better performance than MoE while reducing the number of transferred parameters by over a thousand times.

Weaknesses:
+ As a trade-off for the strengths mentioned above, the model needs to store larger LUTs on storage devices, potentially several times larger than those of MoE, which may limit the applicability of the model in certain scenarios.

Other Comments Or Suggestions:
+ Add citations to the papers mentioned above.

Questions For Authors:
+ Are there other ways to reduce the size of the LUT, such as pruning LUT entries corresponding to rare token IDs?
+ Why did the MoE baselines not use shared experts?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below.

> Q1. Pruning LUT entries corresponding to rare token IDs.

We prune half the entries of the LUTs of MoLoE-16E with 160M activated parameters based on frequency. When the input token ID is pruned, the LUTs return a zero vector. The results are as follows:

|Model|#offloaded param|#activated param|ARC-C|ARC-E|BoolQ|HellaSwag|PIQA|RACE|SIQA|LAMBADA|average|
|-|-|-|-|-|-|-|-|-|-|-|-|
|MoLoE-16E|7.4B|160M|22.4|48.6|60.3|32.7|68.3|30.9|38.6|33.3|41.9|
|MoLoE-16E-pruned|3.7B|160M|21.7|44.8|61.8|31.2|64.5|28.8|39.3|26.6|39.8|

From the experimental results, it is evident that post-training LUT pruning significantly degrades model performance. If smaller LUTs are required, using a more compact vocabulary during the training phase is a better choice.

> Q2. Why did the MoE baselines not use shared experts?

The different settings for MoE and MoLoE are chosen in order to prioritize aligning both **total parameter count in training** and **activated parameter count in inference** for a fair comparison.

+ MoLoE adopts shared experts because, unlike MoE, MoLoE reparameterizes all routed experts into LUTs during inference. If shared experts were not used, MoLoE would not include any FFN layers during inference, which would make it difficult to align its activated parameter count (VRAM usage) with that of dense and MoE models.
+ The reason MoE does not adopt shared experts is that, in our experiments, MoE without shared experts performs better, which has also been thoroughly validated in [1]. We provide our experimental results below (No.1 vs No.2).
+ In the inference stage, MoLoE only has 1 shared expert, while MoE activates 2 experts. Therefore, to make a fair comparison, we set the size of each expert in MoE to half of that in MoLoE (to align activated parameter count in inference), but double the number of experts (to align total parameter count in training).
As a result, the total number of experts cannot be aligned (No.2 vs No.3). But as a reference, we have still provided the results below for MoLoE and MoE both using 1 shared expert and 33 routed experts (No.2 vs No.4).

|No.|Model|#total param in training|#activated param|ARC-C|ARC-E|BoolQ|HellaSwag|PIQA|RACE|SIQA|LAMBADA|average|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|1|MoE (0 shared, top-2 from 34 routed)|1.1B|160M|20.5|50.0|57.5|34.5|67.3|28.6|39.9|36.4|41.8|
|2|MoE (1 shared, top-1 from 33 routed)|1.1B|160M|20.6|48.4|57.3|31.5|65.2|30.5|37.9|34.7|40.8|
|3|MoLoE (1 shared, 16 routed)|1.1B|160M|22.4|48.6|60.3|32.7|68.3|30.9|38.6|33.3|41.9|
|4|MoLoE (1 shared, 33 routed)|2.1B|160M|22.0|52.4|59.3|33.5|68.3|30.7|38.8|33.1|42.2|

[1] OLMoE: Open Mixture-of-Experts Language Models. ICLR 2025

> Q3. Add citations to the papers mentioned above.

Thank you for your suggestion; we will update it in the revised version.
Summary: This paper proposes the MoLoE architecture to address the high memory overhead of MoE architectures. The key difference is that MoLoE converts experts into external LUTs before inference, eliminating expert computation and allowing experts to be stored outside of VRAM. Additionally, since only the lookup results need to be transferred to the GPU, MoLoE nearly eliminates the parameter transmission overhead associated with offloading. Experimental results demonstrate that MoLoE achieves competitive performance with MoE while maintaining latency and VRAM usage comparable to dense models.

Claims And Evidence: The claims made in the paper are supported by experiments.

Methods And Evaluation Criteria: MoLoE meets the requirements of the intended application scenarios. The benchmarks used in the paper are commonly adopted for LLM evaluation.

Theoretical Claims: No theoretical proof is presented.

Experimental Designs Or Analyses: The experimental section includes comparisons with MoE and dense models, as well as ablation studies, which are sound and valid.

Supplementary Material: I have checked the code and configuration in the appendix.

Relation To Broader Scientific Literature: MoLoE discretizes inputs into a finite set by using embeddings as expert inputs, which is conceptually related to quantization methods. Input quantization could be a potential alternative solution for MoLoE.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Pros:
1. This paper focuses on the challenges MoE faces during deployment, and MoLoE offers a novel and practical solution.
2. The writing is clear.
3. The experiments are thorough and convincing.

Cons:
1. MoLoE has more offloaded parameters than MoE. Although off-device storage is larger, this still requires additional deployment overhead.
2. MoLoE activates all expert parameters during each training step, which means that during training, the FLOPs of MoLoE will be higher than those of MoE with the same number of parameters.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. When the number of parameters is the same during the training phase, will MoLoE require more training time than MoE?
2. Intuitively, using embeddings as expert inputs would limit the experts to only accepting context-independent, shallow features. So, why is MoLoE able to perform better than MoE?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below.

> Q1. Will MoLoE require more training time than MoE?

Since MoLoE activates all parameters during training, its training cost is approximately equal to that of a dense model with the same number of parameters, and higher than that of a sparse MoE model with the same parameter count. However, it should be noted that our method is designed for edge deployment of LLMs, where the parameter size is limited by the computational resources available on the edge, typically resulting in smaller models. Therefore, the cost of training these smaller-scale LLMs on a cluster remains manageable.

> Q2. Why is MoLoE able to perform better than MoE?

+ Although the experts' inputs do not contain context information, they can still indirectly influence the model's handling of context by altering the input to subsequent attention layers. Additionally, the retained shared expert operates similarly to MoE and dense models, where it takes the output from the previous layer as its input, and thus can also handle context-related information.
+ By changing the expert inputs to embedding tokens, the routed experts can be reparameterized as LUTs, resulting in almost zero computation during inference. This allows us to activate all experts without incurring additional costs, which helps recover some of the performance loss caused by altering the expert inputs (see Table 7). Additionally, since activating all experts does not require auxiliary losses, the training and inference objectives are more aligned, leading to improved model performance (see Table 4).
Summary: The paper proposes Mixture of Lookup Experts (MoLoE), a new variation of Mixture-of-Experts (MoE) architectures that significantly reduces GPU inference latency for batched generation.

Claims And Evidence: I think the claims and evidence are good.

Methods And Evaluation Criteria: Something is not aligned with the baselines.

Theoretical Claims: There are no theoretical claims in this paper.

Experimental Designs Or Analyses: I checked the experimental designs and analyses. Overall, it is good, but it needs more alignment and baselines.

Supplementary Material: Yes. Pseudocode as well as hyperparameter settings.

Relation To Broader Scientific Literature: This work mainly focuses on the loading latency for MoE models in the offloading scenario.

Essential References Not Discussed: I think it's better to compare with Hwang, Ranggi, et al. "Pre-gated MoE: An algorithm-system co-design for fast and scalable mixture-of-expert inference." 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA). IEEE, 2024.

Other Strengths And Weaknesses:

**Strengths:**
1. Introduces the LUT to combine the advantages of the dense model (balanced training) and the MoE architecture (parameter scaling).
2. Training all experts simultaneously removes the need for auxiliary losses, simplifying training and improving efficiency.

**Weaknesses:**
1. In Table 3, the comparison between MoE (0 shared, top-2 routing) and MoLoE (1 shared, 4 routed) is inconsistent. I think a direct comparison under identical conditions would strengthen the evaluation.
2. I have concerns about the context information in your MoLoE model. After being re-parameterized, during inference, the input ID and output are fixed for each token. As Table 3 shows, performance under LAMBADA is weaker than MoE. Could you provide results under complex language modeling tasks like WikiText or Winogrande?
3. As I understand, the tokenizer is important for the LUT's construction.
Have different tokenizer settings been explored? An ablation study or discussion regarding the tokenizer would be helpful.

Other Comments Or Suggestions: I think some system-level optimization work should be discussed or used as a baseline, e.g., Hwang, Ranggi, et al. "Pre-gated MoE: An algorithm-system co-design for fast and scalable mixture-of-expert inference." 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA). IEEE, 2024.

Questions For Authors: See Weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for their thoughtful feedback. We address specific concerns and questions below.

> Q1. The comparison between MoE and MoLoE is inconsistent.

The different settings for MoE and MoLoE are chosen in order to prioritize aligning both **total parameter count in training** and **activated parameter count in inference** for a fair comparison.

+ MoLoE adopts shared experts because, unlike MoE, MoLoE reparameterizes all routed experts into LUTs during inference. If shared experts were not used, MoLoE would not include any FFN layers during inference, which would make it difficult to align its activated parameter count (VRAM usage) with that of dense and MoE models.
+ The reason MoE does not adopt shared experts is that, in our experiments, MoE without shared experts performs better, which has also been thoroughly validated in [1]. We provide our experimental results below (No.1 vs No.2).
+ In the inference stage, MoLoE only has 1 shared expert, while MoE activates 2 experts. Therefore, to make a fair comparison, we set the size of each expert in MoE to half of that in MoLoE (to align activated parameter count in inference), but double the number of experts (to align total parameter count in training). As a result, the total number of experts cannot be aligned (No.2 vs No.3). But as a reference, we have still provided the results below for MoLoE and MoE both using 1 shared expert and 33 routed experts (No.2 vs No.4).
|No.|Model|#total param in training|#activated param in inference|ARC-C|ARC-E|BoolQ|HellaSwag|PIQA|RACE|SIQA|LAMBADA|average|
|-|-|-|-|-|-|-|-|-|-|-|-|-|
|1|MoE (0 shared, top-2 from 34 routed)|1.1B|160M|20.5|50.0|57.5|34.5|67.3|28.6|39.9|36.4|41.8|
|2|MoE (1 shared, top-1 from 33 routed)|1.1B|160M|20.6|48.4|57.3|31.5|65.2|30.5|37.9|34.7|40.8|
|3|MoLoE (1 shared, 16 routed)|1.1B|160M|22.4|48.6|60.3|32.7|68.3|30.9|38.6|33.3|41.9|
|4|MoLoE (1 shared, 33 routed)|2.1B|160M|22.0|52.4|59.3|33.5|68.3|30.7|38.8|33.1|42.2|

[1] OLMoE: Open Mixture-of-Experts Language Models. ICLR 2025

> Q2. Results under complex language modeling tasks like WikiText or Winogrande.

We reported the results for a 1B activated parameter model on WikiText and Winogrande. MoLoE outperforms MoE on Winogrande, but performs slightly worse on WikiText.

|Model|#Param Loaded per Token|WikiText (ppl↓)|Winogrande (acc↑)|
|-|-|-|-|
|Dense|0|17.7|54.0|
|MoE-10E|537M|15.6|55.3|
|MoLoE-4E|0.26M|16.2|56.5|

Although the routed experts in MoLoE do not directly receive context information, they can still indirectly influence the model's handling of context by altering the input to subsequent attention layers. Additionally, the retained shared expert operates similarly to MoE and dense models, taking the output from the previous layer as its input, and thus can also handle context-related information.

> Q3. Different tokenizer settings.

In addition to Pythia's tokenizer with a vocabulary size of 50k that we used, we also tried the 32k vocabulary tokenizer from Mixtral. The results are shown below.

|Model|#offloaded param|#activated param|#Param Loaded per Token|8-task average|
|-|-|-|-|-|
|MoE-10E (Pythia tokenizer)|0.3B|160M|57M|40.3|
|MoLoE-4E (Pythia tokenizer)|1.8B|160M|0.037M|40.8|
|MoE-10E (Mixtral tokenizer)|0.3B|134M|57M|39.8|
|MoLoE-4E (Mixtral tokenizer)|1.2B|134M|0.037M|40.2|

The primary impact of the tokenizer on MoLoE is that the size of its LUTs is proportional to the vocabulary size.
However, since different tokenizers also affect the performance of all kinds of models, it is difficult to isolate the tokenizer's specific effect on MoLoE's performance. Nonetheless, at least from the experimental results, we can observe that MoLoE is effective under different tokenizers.

> Q4. Compared with Pre-gated MoE.

Pre-gated MoE allows for the parallelization of expert prefetching and computation. Compared to this approach, our method has the following advantages:

+ Pre-gated MoE does not increase inference latency under the assumption that the time to load parameters is less than or equal to the computation time. However, during batch decoding, since different tokens require different experts, the number of experts to prefetch increases (in the worst case, all experts need to be loaded), which in turn increases the latency.
+ Additionally, when deployed on edge devices, it may be necessary to offload to lower-tier storage devices such as SSDs, where the transfer bandwidth is much smaller, causing the time to load parameters to be significantly higher than the computation time, leading to a substantial increase in overall latency.
+ In contrast, our MoLoE fundamentally reduces the amount of data that needs to be transferred by more than three orders of magnitude, thus avoiding additional latency during decoding.

The two methods actually offer two different solutions in distinct directions.
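The "more than three orders of magnitude" figure quoted above can be sanity-checked against the per-token numbers in the Q2 table of this thread:

```python
# Per-token parameter loads for the 1B-activated models (from the Q2 table).
moe_load_per_token = 537e6      # MoE-10E: expert params loaded per token
moloe_load_per_token = 0.26e6   # MoLoE-4E: lookup results loaded per token

ratio = moe_load_per_token / moloe_load_per_token
# ~2065x, i.e. a bit over three orders of magnitude less data moved
# per decoded token by MoLoE
```

So for decoding, the reduction factor is roughly 2,000x, consistent with the rebuttal's claim.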
Revisiting Chain-of-Thought in Code Generation: Do Language Models Need to Learn Reasoning before Coding?
Accept (poster)
Summary: The paper explores the impact of data formatting when finetuning LLMs with synthetically generated data. Specifically, the data--composed of code and corresponding reasoning steps for an NL-to-code problem--is generated by a stronger "teacher" model (the authors state that it is a DeepSeek model in Appendix B, but do not specify which one). The data is then formatted in different ways by either putting the reasoning before or after the code snippet, or leaving it out altogether. The authors find that, somewhat surprisingly, including the reasoning before the code snippet does not seem to benefit downstream performance. They then go on to argue that including it *after* the code snippet may in fact benefit the SFT'd model the most.

## update after rebuttal

I thank the authors for a productive and engaging rebuttal discussion, which has certainly strengthened the findings of the paper. In particular, the addition of proper statistical testing, which should have been included in the original paper given the many possible confounding factors present, was much appreciated. In the end, I have decided to nonetheless keep my original score of 2 (Weak Reject). If this is surprising to the authors then suffice it to say that I sympathize with them regarding the difficulty of presenting convincing evidence for counterintuitive findings. In its current form, the paper has me convinced that there is something amiss in the way we think about combining reasoning and code, but what exactly that is I am not convinced of. I note that the authors surely have tried to answer this question from a bunch of different angles, but none of them seem to have shed much light on the issue; in particular, none of the experiments in Section 6.1 seem quite insightful enough to me.
I will note to the AC that my vote for rejection is indeed to be taken to be quite weak; if the other reviewers are happy for the paper to be accepted in its current form then I will not vehemently oppose it. I simply think that the authors seem to have struck upon something deeply surprising and interesting, and with a few more months would likely be able to uncover what the cause is in much greater and more illuminating detail, ultimately leading to a much stronger contribution to the literature. As is, the paper will simply raise a few eyebrows, but not offer any lasting insights (in my opinion).

Claims And Evidence: I do not think the claim/observation that formatting the data such that the reasoning follows the code benefits performance is supported by the evidence. This finding appears a bit hand-picked; there are many experiments in which, for example, Code w/o CoT does better than $C_\text{precede}$, but this is not even acknowledged (Fig 10, right, high temps; Fig 9, left; Fig 7, right; Fig 2, llama). The large number of completely unrelated experiments, many of which are then not even discussed much in the text, does not inspire a great deal of confidence in the results being statistically significant, either. For example, the discussion of the last experiment is simply "we aim to gain insight into the degree of semantic match correspondence and the extent to which the generated outputs adhere to the given instructions." That just says what you *wanted* the experiment to tell you; it says absolutely nothing about what you then actually found, and indeed the figure does not seem particularly telling either.

Methods And Evaluation Criteria: The dataset, model choice, and metrics all make sense to me. In particular, it's good that the authors included leakage-resistant benchmarks like LiveCodeBench.
Their findings would have been strengthened somewhat by trying a different teacher model, too, although it is understandable that this may not have been in scope in terms of compute costs. Theoretical Claims: N/A; no theoretical claims or proofs. Experimental Designs Or Analyses: Evaluating different data formats for SFT by simply doing SFT with a few different base models makes sense. This is the core experiment in the paper, which I am on board with. However, there are many, many "side experiments" that I do not really understand the relevance of. Take for example the data mixing experiment: training on data that is inconsistently formatted hurts performance. Is that really a finding? What does that tell you about how to format your data? To my eyes, this is a complete non-result. The worst case of this is at the end of p. 7, when the authors "evaluate how varying the number of samples `k` in `pass@k` affects performance for the different methods" (Figure 10, left), and conclude that: "As k increases, all configurations demonstrate an improvement in accuracy, indicating models exploring more possibilities and increasing the chances of producing correct outputs". This doesn't even make any sense as an experiment; pass@k as a metric is monotonically non-decreasing by construction, since if you draw more samples i.i.d. the probability that at least one of them is correct cannot go down. This is a very concerning methodological misunderstanding and raises serious suspicion as to the validity of the other experiments, too. Supplementary Material: I reviewed Appendices A-D to make sure I understood the pipeline fully. I did not feel the need to explore the rest of the appendices. Relation To Broader Scientific Literature: The finding that including reasoning steps when distilling from a bigger model to a smaller code-specific model does not seem to benefit downstream performance is somewhat counter-intuitive, and I think an interesting contribution.
This goes against the grain of recent findings that suggest that using SFT to distill CoT reasoning discovered with RL into a smaller model can help in math and reasoning domains (c.f. the DeepSeek-R1 paper). Essential References Not Discussed: I was surprised that Austin et al's "Program Synthesis with Large Language Models" (2021) was not cited as an example of early use of LLMs for code, since it is typically looked at as the seminal paper in the field. More importantly, there does not appear to be a citation for pass@k, even though the unbiased estimate employed by the authors was first introduced in the Codex paper ("Evaluating Large Language Models Trained on Code", Chen et al., 2021). I believe the original idea of a pass@k metric is due to Kulal et al.'s "SPoC: Search-based Pseudocode to Code" (2019). Other Strengths And Weaknesses: I appreciate that this paper is, at its core, contributing a non-obvious and somewhat counterintuitive finding: that including the reasoning steps when distilling the ability to generate code from a larger model to a smaller model does not appear to make a measurable difference. That is, to me, a valuable contribution; however, it is hard to tell how much that depends on the teacher model (which they keep fixed). Other Comments Or Suggestions: None. Questions For Authors: 1. Do you think the core finding--that including reasoning is not necessary when distilling from one code llm to another--would translate to different base/teacher models, too? 2. Why do you think the effects of varying the number of samples or the sampling temperature are indicative of the different SFT'd models' underlying capacity? 3. Do you think your supposed finding that putting the reasoning after the code is better than before (or not including it at all) is statistically significant? Code Of Conduct: Affirmed. Overall Recommendation: 2
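As context for the pass@k discussion above: the unbiased estimator from the Codex paper computes, for each problem with `n` samples of which `c` pass, the probability that at least one of `k` randomly chosen samples is correct, and averages this over problems. A minimal stdlib-only sketch (not code from the paper under review), which also demonstrates the reviewer's monotonicity point:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k for one problem: n samples drawn, c of them correct.
    Equals 1 - C(n-c, k) / C(n, k), the probability that a random size-k
    subset of the n samples contains at least one correct sample."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples exist, so some subset must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# Monotonicity: with n = 10 samples and c = 4 correct, pass@k never
# decreases as k grows, by construction.
scores = [pass_at_k(10, 4, k) for k in range(1, 11)]
assert all(a <= b for a, b in zip(scores, scores[1:]))
print(scores[0], scores[-1])  # pass@1 = 0.4, pass@10 = 1.0
```

Because each per-problem score is non-decreasing in k, so is their average; a rising pass@k curve therefore carries no information by itself, which is the reviewer's objection.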
Rebuttal 1: Rebuttal: Thanks for the helpful comments! We would like to summarize your concerns and provide our responses below: 1. **which is the teacher model**: We select **DeepSeek-V2.5-1210** as our teacher model because of its strong capabilities and acceptable cost. It was the most suitable version available during our primary experimental phase. 2. **different teacher model**: We take **GPT-4o-2024-08-06** as another teacher model to synthesize the CoT&Code dataset again. We perform experiments on DeepSeek-Coder-6.7B-Base using the methods mentioned in the paper and evaluate the model on EvalPlus with the following results:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ------------- | -------------- | -------------- | --------- | --------- | ----- |
| $C_{w/o}$ | 71.95 | 65.85 | 77.77 | 67.46 | 70.76 |
| $C_{follow}$ | 65.24 | 59.14 | 76.45 | 62.96 | 65.95 |
| $C_{precede}$ | 72.56 | 66.46 | 78.57 | 67.72 | 71.33 |

3. **whether the conclusion contradicts the results of DeepSeek-R1**: Following the release of DeepSeek-R1, we have reviewed our experimental results and believe that our conclusions are indeed consistent with, rather than contradictory to, their work. We agree that a proper CoT process is essential for deriving accurate answers, especially in mathematical and logical domains. However, we disagree with the suggestion that CoT processes must be expressed through natural language. Well-designed and high-quality code should also be recognized as a valid form of CoT reasoning, and models do not need to learn to reason through natural language but can learn to code directly and then explain. Thus, our study complements rather than contradicts the results of DeepSeek-R1. To further support this, we take the open-source code generation dataset synthesized with **DeepSeek-R1** and directly treat R1's thinking trace and final output as the CoT and result, respectively.
We fine-tune the models and evaluate them with the same settings as in **2**:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ------------- | -------------- | -------------- | --------- | --------- | ----- |
| $C_{w/o}$ | 64.63 | 56.70 | 71.16 | 62.43 | 63.73 |
| $C_{follow}$ | 56.09 | 51.82 | 64.55 | 55.02 | 56.87 |
| $C_{precede}$ | 64.02 | 57.31 | 71.69 | 62.43 | 63.86 |

We note that, as mentioned in other recent studies of R1, there is a certain probability that the model will fail to stop thinking when using the $C_{follow}$ format. 4. **whether the evidence sufficiently supports the claim**: We thank the reviewer for raising this point. Indeed, this is consistent with our first argument in the paper, that well-organized and high-quality code can effectively serve as a CoT. Thus, it is reasonable that in some cases the model may have learned enough from the code alone. However, this phenomenon is not universally observed in our experiments. In particular, the explanations following the code added value in most scenarios. Therefore, we hope that the reviewer will notice the second finding, namely that placing the CoT after the code generally provides an additional benefit in most experimental settings, which builds on our initial proposal and is what we mainly observe. We adhere to standard settings using sufficiently large model and dataset sizes, which ensures stable experimental results even without including variance. Furthermore, we have presented extensive experimental results across various configurations in Appendix G. We appreciate your suggestion and plan to report the statistical results in a future version. 5. **discussion of pass@k and temperature**: Different k values reflect different aspects of model performance. If model A gives [10, 0, 0] and model B gives [4, 3, 3] over three problems with 10 attempts, both have the same pass@1, but B has a higher pass@10.
Small k captures the model's confidence (A solves a problem very well), while larger k reflects the coverage of the model's ability (B can solve more problems). Therefore, varying k reveals different evaluations of model behavior and potential. In addition, temperature also affects the model's performance, with low temperatures tending to make the model's output more fixed and high temperatures encouraging more exploration. We want to show that the conclusions are consistent across these settings. 6. **brief description in discussion**: We apologize for any confusion caused by our brief description of the experiments. We confirm that each experiment was carefully designed with reliable conclusions, and we will make the meaning of the discussions more explicit in future versions. 7. **missing references**: We appreciate the reviewers for pointing out the omissions in related work, and we promise that these relevant references will definitely be included. --- Rebuttal Comment 1.1: Comment: Thank you for responding to my questions. After reviewing the reply to my review, as well as those to the others, I have decided to keep my score. The text as is does not sufficiently justify such strong conclusions, in my opinion; as emphasized also by ueRf, without more thorough statistical analysis it is difficult to believe that the results are significant. Furthermore, while I appreciate your reply, it does not explain the discussion that is in the submitted text. In particular, your explanation of why measuring pass@k, which basically boils down to measuring diversity, makes sense. However, the paper just contains this discussion: ``` As k increases, all configurations demonstrate an improvement in accuracy, indicating models exploring more possibilities and increasing the chances of producing correct outputs. Among the configurations, C_precede consistently outperforms others, emphasizing the importance of positioning CoT explanation.
``` The first observation is then, according to my interpretation of your reply, just saying that the plots suggest that the model has not completely collapsed to a single mode, so drawing more samples leads to increasing (rather than just non-decreasing) pass rates. This I do not believe to be a particularly insightful comment. The second observation is not even correct: looking at the figure, C_w/o has a higher pass@1 than C_precede, for example. This makes your claims appear a bit cherry picked, to be frank. On the whole, I think the experiments are thorough and the findings (if true) are interesting. However, as it stands I do not think the paper does a good enough job providing evidence in their favor. For future versions of this work I would suggest the authors to consider distilling their experiments into a smaller set of findings which are backed up by proper statistical due diligence. --- Reply to Comment 1.1.1: Comment: Thanks for the helpful response! We will summarize your concerns and provide our responses below: 1. **Concerns about the discussion of pass@k**: We sincerely thank you for your insightful comments. There was a typo in the presentation that led to loose language: > As k increases, ..., emphasizing the importance of positioning CoT explanations. Our intended message is to claim that different k values correspond to different evaluation settings, allowing us to validate our conclusions' generalizability. To prevent confusion and unnecessary misunderstanding, we will revise it in the next version as: > We change the k-values when evaluating the models to validate the generalizability of our conclusions. The results show that C_precede mostly outperforms the others, highlighting the importance of positioning the CoT explanation. 2. **distilling experiments**: All discussions and experiments are designed to support our main conclusion. 
Because this conclusion is counterintuitive, we conduct experiments from multiple perspectives to elucidate its underlying rationale. Due to space constraints, our explanations are somewhat brief. We will reorganize the discussions and provide additional clarifications to avoid misunderstandings in the future. 3. **Significance Analysis:** We fully agree about the importance of significance analysis, and conduct a series of experiments to validate our conclusions. We evaluate the performance of the model on the EvalPlus benchmark and use a t-test to show the statistical significance. - We run five experiments on the synthetic dataset with different random seeds and fine-tune the DeepSeek-Coder-6.7B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 69.19 | 70.06 | 70.34 | 70.48 | 70.83 | 70.17±0.62 |
| $C_{follow}$ | 64.88 | 66.54 | 66.03 | 66.38 | 65.84 | 65.93±0.65 |
| $C_{precede}$ | 71.49 | 71.18 | 71.82 | 71.14 | 70.96 | 71.31±0.34 |

The p-values (0.0069, 0.0000) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$. - We run five experiments on the synthetic dataset with different random seeds and fine-tune the Llama-3.1-8B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 60.47 | 60.98 | 60.6 | 60.61 | 59.84 | 60.50±0.41 |
| $C_{follow}$ | 58.79 | 59.38 | 60.1 | 59.1 | 59.65 | 59.40±0.51 |
| $C_{precede}$ | 61.68 | 60.96 | 60.67 | 62.01 | 61.23 | 61.30±0.54 |

The p-values (0.0290, 0.0004) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$. - We take **GPT-4o-2024-08-06** as another teacher model to synthesize the dataset again.
We run five experiments on the new dataset with different random seeds and fine-tune the DeepSeek-Coder-6.7B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 70.76 | 70.89 | 69.96 | 71.51 | 71.2 | 70.86±0.58 |
| $C_{follow}$ | 65.95 | 66.98 | 66.67 | 65.93 | 66.39 | 66.38±0.46 |
| $C_{precede}$ | 71.33 | 72.4 | 72.18 | 71.96 | 72.58 | 72.09±0.48 |

The p-values (0.0068, 0.0000) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$. - We directly use educational instructions from OpenCoder to synthesize the dataset without giving the reference code. We run five experiments on the new dataset with different random seeds and fine-tune the DeepSeek-Coder-6.7B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 70.5 | 70.45 | 70.04 | 70.36 | 70.54 | 70.38±0.20 |
| $C_{follow}$ | 66.29 | 66.94 | 66.13 | 67.44 | 65.54 | 66.47±0.74 |
| $C_{precede}$ | 71.17 | 72.34 | 70.93 | 71.23 | 71.84 | 71.50±0.58 |

The p-values (0.0033, 0.0000) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$.
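For concreteness, the first significance table above can be checked with a short stdlib-only sketch. It assumes the rebuttal's t-test is a standard pooled-variance two-sample test over the five seeds (the exact variant is not stated), which reproduces the reported p = 0.0069 comparison:

```python
from statistics import mean, stdev

def pooled_t(a, b):
    """Two-sample t statistic with pooled (equal) variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Seed results (0-4) from the first table above.
c_precede = [71.49, 71.18, 71.82, 71.14, 70.96]
c_wo      = [69.19, 70.06, 70.34, 70.48, 70.83]

t = pooled_t(c_precede, c_wo)  # t ≈ 3.60 with df = 8
# The two-tailed 1% critical value at df = 8 is 3.355, so |t| above it
# implies p < 0.01, consistent with the reported p = 0.0069.
print(round(t, 2))
```

With only five runs per configuration, the test is sensitive to the equal-variance assumption; a Welch variant would be a natural robustness check.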
Summary: This paper investigates the order of CoT and code in code generation tasks. The authors collect pairs of program questions and response codes from six code datasets, and then prompt an LM to generate CoT, finally creating a dataset of (question, code, natural-language CoT) triplets. Their experiments show that (question, code, CoT) (that is, code-precede) surpasses (question, CoT, code) (that is, code-follow), and they also make many observations about the differences between these two patterns, shedding light on how to train a code generation model. ## Update after rebuttal Experiments with larger models, a longer training stage, other CoT sources, and an additional study of difficulty have partly resolved my concerns, but I am still not sure whether the conclusions hold for the most powerful models (like models with >100B parameters and billions of training tokens). Claims And Evidence: The main claim of the paper is that the code-precede CoT outperforms code-follow CoT in code generation tasks. This claim is supported by experiments on various datasets (e.g., HumanEval, MBPP, LiveCodeBench) and models (e.g., DeepSeek-Coder-6.7B, Llama-3.1-8B, Qwen2.5-chat-7B, Qwen2.5-coder-7B). While the results are convincing, with a consistent and large gap between the two approaches, I believe the absence of variance analysis in the figures and tables is a minor issue. Including such analysis would make the results more robust. However, I find the authors' explanation "code itself is a suitable CoT, while traditional CoT should be regarded as an explanation of the code rather than reasoning" to be insufficiently supported. I have several concerns: 1. **Task Difficulty**: While the pretrained code corpus is diverse and the LeetCode experiments discuss different difficulty levels, it's unclear whether the difficulty of the code generation tasks is sufficiently varied.
Task complexity ranges from simple translation of instructions to code, to solving complex algorithmic problems or ambiguous GitHub issues. I believe task difficulty significantly impacts the role of code and CoT. For simpler tasks, CoT may act mainly as a post-explanation, but for more complex tasks, defining the problem clearly, choosing the right algorithm, and debugging are crucial parts of the CoT, and these cannot be fully replaced by the code itself. Although the authors provide experiments on LeetCode with varying difficulty, they focus only on code problem complexity. Additional factors like the clarity or completeness of the question, or code length, should also be considered. At the same time, all four methods show <10% performance on the LeetCode Hard subset, so these results may not be very convincing. 2. **Nature of the Phenomenon**: The authors argue that "code itself is a suitable CoT", but it is unclear whether this phenomenon is inherent to code generation tasks or a result of differing difficulty between generating code and CoT. Since code is a formal language, it might be easier for the model to learn than natural language CoT. Further supporting my concerns, Figure 3 shows that the perplexity (ppl) on CoT tokens is significantly higher than that on code tokens, both in the code-precede and code-follow models. This suggests the model is less familiar with natural language CoT. In such cases, CoT might not be helpful for the model. If so, it is doubtful whether a sufficiently powerful model would have yielded different conclusions. After all, being harder does not necessarily mean ultimately worse performance. 3. **Training Dataset and Process**: The training dataset is relatively small (51,200 samples), and the training process only involves SFT. This is much smaller than real-world datasets. For example, DeepSeek-Coder-Instruct uses 2 billion tokens to fine-tune natural language abilities.
The paper does not address whether the conclusions would still hold with a larger training set, or if the code-follow approach might perform better with more training data. 4. **Bias from Synthetic CoT**: The synthetic CoT in the experiments is generated by an LLM, conditioned on the question and code, which makes it more of an explanation of the code than a reasoning process. The authors do not explore whether the results would change if the CoT were generated based on the question and before the code generation, or if it were generated as a human-like reasoning process. In real-world scenarios, CoT typically consists of two parts: one before the code, offering general instructions on how to solve the problem, and one after, explaining the code in detail. The former is more like a reasoning process, and the latter is a code explanation. I believe this distinction should serve as a baseline for comparison. Methods And Evaluation Criteria: They create datasets with a classic method by using an LLM to generate CoT. There is a concern that the synthetic CoT could introduce bias to the conclusion, as I claimed in point 4 above. They do not provide new evaluation metrics but use the commonly used pass@k. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Beyond the main results and the concerns I mentioned above, there are many experiments used to explain the different roles of CoT and code in the Section 6 Discussion. I feel some of them are not necessary, or the authors should explain their motivation more clearly. For example, it is unclear why the "layer gradient norm", "adversarial code robustness", and "signature removal analysis" experiments are necessary to explain the conclusion. The subsection "Generalization across sizes" can simply be moved to Figure 2. The discussion about "temperature" and "pass@k" is too detailed for me and can be moved to the appendix. And some other experiments need to be explained more clearly: 1.
In Figure 4, the reader cannot tell whether a KL-divergence of 0.13 is small enough. 2. There is a lack of results analysis for Figure 11. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The observation that code-precede CoT surpasses code-follow CoT is useful for training better code generators, which is an important task in the field of AI. However, I worry about whether the conclusion can be generalized to the pretraining stage or to a much larger dataset. If the conclusion is due to current models being more familiar with code than natural-language CoT, or natural-language CoT being naturally harder to learn than code, the current conclusion may not hold when the model is finally trained well on CoT. At the same time, how the conclusion can be used in practice is not clear. Does the conclusion mean that the model should be trained to first generate code and then generate CoT? (Or even without CoT? I find that even the w/o model shows decent performance.) If so, how can we explain the phenomenon that o1-like long-reasoning models always show better performance than general ones like GPT-4o? Essential References Not Discussed: The related work should be updated with some more recent works. For example, though the authors use Qwen2.5-Coder in their experiments, they do not mention it in the related work. Some recent directions like repo-level code generation (like [SWE-bench](https://www.swebench.com/)) or OI-style competition (like [DeepSeek-R1](https://deepseek.ai/)) should be mentioned. There are some works discussing CoT and code, like [PAL](https://arxiv.org/abs/2211.10435) and [code-prompt](https://arxiv.org/abs/2305.18507), which use code as CoT to solve math problems. These works are very related to the paper and should be mentioned in the related work. Other Strengths And Weaknesses: I believe one of the strengths of the article is its extensive experimental content.
However, this also raises some concerns for me, as it may be a sign of insufficient core contributions in the paper. This is especially true when considering that many of the experiments in the main text seem somewhat redundant or overly detailed. I think the authors should reorganize the three-page discussion more carefully, clearly explaining the motivation, results, and relationships of each experiment. Particularly, if certain experiments are not tightly linked to the conclusions, they should either be removed or placed in the appendix. Based on the current conclusions of the paper, I feel that the contribution falls slightly short of what would be expected from a complete conference paper. The overly detailed experiments, in my opinion, do not add significant value to the overall contribution. Other Comments Or Suggestions: See above. Questions For Authors: There are two main concerns. First, to what extent can the conclusions of the paper hold when the training scale is sufficiently large and the model's capabilities are strong enough? This is crucial as it will determine how the conclusions of the paper impact the training and application of related models. Second, regarding the substantial amount of discussion included in the paper, I would like the authors to justify its necessity, explaining how these discussions contribute to the core contributions of the paper, rather than merely summarizing the experimental results. Code Of Conduct: Affirmed. Overall Recommendation: 3
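To make the three data layouts debated in this review concrete, here is a minimal sketch of how a single SFT target might be assembled in each format; the section headers and function name are illustrative, not the paper's actual template:

```python
def build_sample(question: str, code: str, cot: str, fmt: str) -> str:
    """Assemble one SFT training target in the three layouts compared:
    code only (c_wo), CoT then code (c_follow), code then CoT (c_precede)."""
    if fmt == "c_wo":        # code without any natural-language CoT
        body = code
    elif fmt == "c_follow":  # code follows the CoT: reasoning first, then code
        body = f"{cot}\n\n{code}"
    elif fmt == "c_precede": # code precedes the CoT: code first, explanation after
        body = f"{code}\n\n{cot}"
    else:
        raise ValueError(f"unknown format: {fmt}")
    return f"### Question:\n{question}\n\n### Response:\n{body}"

sample = build_sample("Sum a list.", "def total(xs):\n    return sum(xs)",
                      "We iterate once and accumulate the running sum.", "c_precede")
assert sample.index("def total") < sample.index("We iterate")
```

Note that $C_{follow}$ here means the code follows the CoT (reasoning first), while $C_{precede}$ puts the code first with the explanation after it, matching the naming used in the review.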
Rebuttal 1: Rebuttal: We address the concerns as below: Q1: How well do the paper's conclusions hold with large-scale training and a powerful model? R1: **Table A. Extension to a larger-scale training dataset:** We construct a large dataset of 800K samples. We train Llama-3.1-8B and compare it with DeepSeek-R1-Distill-Llama-8B on EvalPlus:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ---------------------- | -------------- | -------------- | --------- | --------- | ----- |
| DS-R1-Distill-Llama-8B | 76.82 | 70.73 | 71.42 | 59.52 | 69.62 |
| Code follow CoT | 74.39 | **70.12** | 67.98 | 55.55 | 67.01 |
| Code precede CoT | **76.82** | 69.51 | **74.86** | **62.69** | **70.97** |

**Table B. Extension to the pre-training stage:** We take "the-stack" as the source seed to synthesize the pretraining dataset. We perform experiments on DeepSeek-Coder-6.7B-Base:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ---------------- | -------------- | -------------- | --------- | --------- | ----- |
| Code follow CoT | 61.58 | 55.48 | 75.13 | 60.31 | 63.12 |
| Code precede CoT | **69.51** | **64.63** | **77.24** | **63.22** | **68.65** |

**Table C. Extension to more powerful models:** We conduct experiments with a larger LLM, Qwen2.5-Coder-Base-32B. The results are reported as follows:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ---------------- | -------------- | -------------- | --------- | --------- | ----- |
| Code follow CoT | 85.98 | 79.27 | 87.83 | 71.69 | 81.19 |
| Code precede CoT | 86.59 | 79.27 | 89.15 | 76.19 | 82.80 |

The results in Tables A, B, and C show that our conclusion, "CoT following the code can work better," holds when scaling up the training dataset and model size. Q2: How can the conclusion be applied in practice, and why the extensive discussions?
R2: A key contribution is that using the "code precedes CoT" pattern outperforms the "code follows CoT" approach: (1) it establishes a more effective paradigm for integrating CoT with code during training. (2) During inference, it can facilitate early stopping by generating only the code, thereby reducing computational cost and latency. All discussions and experiments are designed to support our main conclusion. We will reorganize them. Q3: Additional questions raised in the comments: Q3.1: Performance across various task difficulties (such as various code lengths): R3.1: We took your suggestion and reclassified the questions' difficulty according to the length of the output answers:

| | Length < 500 | 500 < Length < 1000 | 1000 < Length < 1500 | Length > 1500 |
| ---------------- | ------------ | ------------------- | -------------------- | ------------- |
| Code follow CoT | 0.0 | 20.17 | 10.64 | 0.0 |
| Code precede CoT | 25.0 | 29.20 | 19.23 | 9.09 |

And DeepSeek-Coder-Instruct-6.7B got only 9.1 on LeetCode Hard with the same base. Q3.2: Nature of the phenomenon & Q3.3: Training data and process: R3.2 & R3.3: These questions have been addressed in R1, which shows that our conclusion still holds with a more powerful model and a larger training set. Q3.4: Bias from synthetic CoT. R3.4: We provide additional experiments where the CoT is generated first. We directly use educational instructions from OpenCoder to synthesize the CoT without the code, and the same conclusion holds.

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ---------------- | -------------- | -------------- | --------- | --------- | ----- |
| Code follow CoT | 62.19 | 54.87 | 71.95 | 57.93 | 61.74 |
| Code precede CoT | 68.29 | 60.36 | 78.57 | 62.16 | 67.34 |

Q3.5: Variance Analysis. R3.5: We adhere to standard settings using sufficiently large model and dataset sizes, which ensures stable experimental results even without including variance.
Furthermore, we have presented extensive experimental results across various configurations in Appendix G. We appreciate your suggestion and plan to report the variance in a future version. Q3.6: Explain why o1-like models are better than general ones, and the relation to your conclusion. R3.6: A proper CoT process is crucial for deriving accurate answers, particularly in mathematical and logic domains. However, limited research links CoT and code for code generation. We argue that using code preceding CoT is an effective data paradigm. Well-designed, high-quality code should be recognized as a valid form of CoT. We checked open-source datasets generated by R1 and confirmed the existence of this phenomenon. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed explanation. I still have a few questions and comments: **Code Length and Task Difficulty**: The length of the code is only one aspect of task difficulty. I believe the nature of the task itself is more important. For instance, a competitive programming problem (OI) is generally more challenging than some lengthy data-processing code, even though the former tends to be much shorter in length. The authors should consider that code generation is a very broad field and ensure that their conclusions are not restricted to a single dataset. Generally speaking, code is a modality but not a task, just like natural language. Just as in natural language, translating a sentence is completely different from solving a math problem. "Translating some instruction to code" or answering some common problems is completely different from creative code writing that requires experimentation and thinking. Additionally, even when using code length as a metric for task difficulty, the length of the "ground truth" or "reference" code would be a better measure than the length of the generated code, as there are potential confounders.
For example, longer outputs could result from the model's verbosity rather than the inherent difficulty of the task. I believe the counterintuitive phenomenon where shorter code performs worse than longer code may be due to such confounders. **Dataset Size and Upper Bound**: While 800k samples is a large dataset, I am uncertain whether it is large enough to effectively measure the upper bound of these two methods. As I mentioned earlier, coder models are generally trained on billions of tokens, and this is also true for CoT models. I think the results in Table A are insufficient to conclusively demonstrate that code-precede CoT outperforms code-follow CoT. It's still possible that code-precede CoT is simply easier for the model to learn and may perform better on smaller datasets. As evidence, we see that the performance difference between code-follow and code-precede CoT has been reduced from 6% to 4% on average, and the code-follow CoT method even outperforms code-precede CoT on some benchmarks as the training dataset grows. The same trend is evident in Table C, where the gap significantly shrinks compared to the 7B models. This raises questions about the conclusions for stronger models. **Significance Analysis**: I want to emphasize that significance analysis is a critical component of any research, particularly considering the shrinking gap as the model and dataset scale up, as mentioned earlier. Ignoring statistical analysis can seriously undermine the credibility of research in the field, as it may lead to misleading results. I encourage the authors to follow established best practices in this regard, such as the [AAAI checklist](https://aaai.org/conference/aaai/aaai-25/aaai-25-reproducibility-checklist/), which includes items like: >This paper states the number of algorithm runs used to compute each reported result.
(yes/no) > >Analysis of experiments goes beyond single-dimensional summaries of performance (e.g., average; median) to include measures of variation, confidence, or other distributional information. (yes/no) > >The significance of any improvement or decrease in performance is judged using appropriate statistical tests (e.g., Wilcoxon signed-rank). (yes/partial/no) Similarly, the [ACL checklist](https://aclrollingreview.org/responsibleNLPresearch/) advises: >Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Without repeated experiments and statistical analysis, I am unsure whether the observed gaps are statistically significant. --- Reply to Comment 1.1.1: Comment: Thanks for the helpful response! We will summarize your concerns and provide our responses below: 1. **Dataset Size and Upper Bound**: We use an 800k training dataset in the additional experiment (see Table A), following the configuration of DeepSeek-R1, which also uses 800k SFT data for DeepSeek-R1-Distill-Llama-8B. Thus, this setting is both reasonable and large enough to effectively validate our conclusions. Furthermore, the significance analysis results show a statistically significant improvement from our method, confirming that the amount of training data is adequate. In addition, there is no solid evidence that the improvement converges with the scale of the training dataset and the size of the model. Although the results may show some trends due to the capability of the model, the significance analysis has shown that the benefit of our method remains robust as the dataset and model scale. 2. 
**Code Length and Task Difficulty**: We take your suggestion and classify the questions' difficulty according to the length of the reference code:

| | Length < 500 | 500 < Length < 1000 | 1000 < Length < 1500 | Length > 1500 |
| ---------------- | ------------ | ------------------- | -------------------- | ------------- |
| Code follow CoT | 21.42 | 13.26 | 11.11 | 0.0 |
| Code precede CoT | 42.85 | 17.34 | 27.77 | 33.33 |

As for competitive programming problems, we choose LiveCodeBench with difficulty labels, which collects problems from contests across competition platforms such as CodeForces:

| | Easy | Medium | Hard |
| ---------------- | ----- | ------ | ---- |
| Code follow CoT | 45.19 | 9.51 | 0.74 |
| Code precede CoT | 49.67 | 9.72 | 1.04 |

3. **Significance Analysis**: We fully agree about the importance of significance analysis, and conduct a series of experiments to validate our conclusions. We evaluate the performance of the model on the EvalPlus benchmark and use a t-test to show statistical significance.

- We run five experiments on the synthetic dataset with different random seeds and fine-tune the DeepSeek-Coder-6.7B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 69.19 | 70.06 | 70.34 | 70.48 | 70.83 | 70.17±0.62 |
| $C_{follow}$ | 64.88 | 66.54 | 66.03 | 66.38 | 65.84 | 65.93±0.65 |
| $C_{precede}$ | 71.49 | 71.18 | 71.82 | 71.14 | 70.96 | 71.31±0.34 |

The p-values (0.0069, 0.0000) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$.

- We run five experiments on the synthetic dataset with different random seeds and fine-tune the Llama-3.1-8B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 60.47 | 60.98 | 60.6 | 60.61 | 59.84 | 60.50±0.41 |
| $C_{follow}$ | 58.79 | 59.38 | 60.1 | 59.1 | 59.65 | 59.40±0.51 |
| $C_{precede}$ | 61.68 | 60.96 | 60.67 | 62.01 | 61.23 | 61.30±0.54 |

The p-values (0.0290, 0.0004) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$.

- We take **GPT-4o-2024-08-06** as another teacher model to synthesize the dataset again. We run five experiments on the new dataset with different random seeds and fine-tune the DeepSeek-Coder-6.7B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 70.76 | 70.89 | 69.96 | 71.51 | 71.2 | 70.86±0.58 |
| $C_{follow}$ | 65.95 | 66.98 | 66.67 | 65.93 | 66.39 | 66.38±0.46 |
| $C_{precede}$ | 71.33 | 72.4 | 72.18 | 71.96 | 72.58 | 72.09±0.48 |

The p-values (0.0068, 0.0000) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$.

- We directly use educational instructions from OpenCoder to synthesize the dataset without giving the reference code. We run five experiments on the new dataset with different random seeds and fine-tune the DeepSeek-Coder-6.7B-Base model:

| Seed | 0 | 1 | 2 | 3 | 4 | Avg. |
| ------------- | ----- | ----- | ----- | ----- | ----- | ---------- |
| $C_{w/o}$ | 70.5 | 70.45 | 70.04 | 70.36 | 70.54 | 70.38±0.20 |
| $C_{follow}$ | 66.29 | 66.94 | 66.13 | 67.44 | 65.54 | 66.47±0.74 |
| $C_{precede}$ | 71.17 | 72.34 | 70.93 | 71.23 | 71.84 | 71.50±0.58 |

The p-values (0.0033, 0.0000) are < 0.05, showing that $C_{precede}$ has a significant performance improvement over the baselines $C_{w/o}$ and $C_{follow}$.
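For reference, the reported p-values are consistent with a standard pooled-variance two-sample t-test on the five-seed averages. A minimal sketch (an assumption about the exact test used, computed here with only the Python standard library and the DeepSeek-Coder numbers from the first table above):

```python
from statistics import mean, stdev

# Five-seed EvalPlus averages reported in the rebuttal (DeepSeek-Coder-6.7B-Base)
c_wo      = [69.19, 70.06, 70.34, 70.48, 70.83]  # C_w/o
c_precede = [71.49, 71.18, 71.82, 71.14, 70.96]  # C_precede

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal variances, df = n_a + n_b - 2)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

t = two_sample_t(c_precede, c_wo)
# For df = 8, the two-sided 5% critical value is 2.306 and the 1% value is 3.355;
# t comfortably exceeds both, matching the rebuttal's p-value of 0.0069.
print(round(t, 2))
```

Running this yields a t statistic near 3.6, i.e. significant at the 1% level for 8 degrees of freedom, which agrees with the reported p-value.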
Summary: This paper primarily investigates how chain-of-thought reasoning affects code generation performance. The paper first constructs a dataset of 50k pairs for code generation. A series of experiments is then conducted to investigate how the presence and position of a chain-of-thought affect the resulting code generation performance. On HumanEval, BigCodeBench, and LiveCodeBench, the authors find that the standard procedure of having code follow the CoT is in many cases substantially worse than having the CoT follow the code. A series of experiments with different model families (DeepSeek, Llama, Qwen-Chat/Coder) shows similar results. There are a large number of additional experiments ablating and studying this effect in various settings. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: I checked both Section 5 and Section 6. Supplementary Material: No. Relation To Broader Scientific Literature: I do not think the specific research question addressed by the authors has been explored by prior work. I think the closest papers to this paper are papers like Program-of-Thoughts, which demonstrate that reasoning in code can be superior to reasoning in natural language. A common assumption in most reasoning papers is that the CoT should come before the final answer, but I don't think any paper has shown the counterintuitive empirical result that the CoT is better off coming after the code. Essential References Not Discussed: None. Other Strengths And Weaknesses: I think the main finding of the paper is very interesting and calls into question the usefulness of chain-of-thought for code generation. In addition, the experiments are well done, and there is a comprehensive discussion section that includes a large number of ancillary results. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the helpful comments! We appreciate the reviewer for pointing out some other references in related work, and we promise that these relevant references will definitely be included in the future revision.
Summary: The paper argues that in the context of fine-tuning a code generation LLM, appending the CoT after the code solution works better than the typical setting of prepending it. To show this, they generate 50k code generation problems with code solutions and CoTs. Experimentally, they demonstrate that SFT on training data with the CoT preceding the code degrades performance, while CoTs following the code result in improvements. Claims And Evidence: The primary claim in the paper is that the best order for a CoT coding SFT dataset is: prompt -> code -> CoT. The paper provides ample evidence that this applies to the specific dataset the authors created as part of this submission. This is a caveat that should be better highlighted and discussed in the paper. In particular, the question remains whether the findings generalize to SFT data with CoTs that are generated through a different template or generally with a different process. Methods And Evaluation Criteria: Appending CoTs to the code instead of prepending them is sensible when the CoT is viewed as an explanation. The dataset used for the main results (LiveCodeBench) is the current gold standard. Theoretical Claims: no theory Experimental Designs Or Analyses: Some ablations in the discussion section only seem tangentially related and could benefit from further explanation: - context attention weight plots: what's the x-axis? why is the split between code and CoT the same region between left and right? How come both splits between NLP and Code/CoT are equally large? - layer gradient norm: what's the significance of this result? Why does it matter that the gradients of the two datasets have different magnitudes? How does that compare to other datasets where the order of the CoT isn't relevant or works better in the standard order? - inconsistent data analysis, adversarial code robustness: how are these relevant to the main claim of the paper?
Supplementary Material: I've reviewed appendices A, B, and C. Relation To Broader Scientific Literature: Currently, inducing CoTs before the solution is the predominant method to make use of extra tokens and also the way in which CoT SFT data is constructed. This paper challenges this approach in the context of code generation. Essential References Not Discussed: not that I'm aware of Other Strengths And Weaknesses: strengths:
- the paper points out an interesting result in the context of SFT for code generation
- the experimental results and analysis are quite extensive

weaknesses:
- more care and discussion should have been devoted to the relation between the specific training dataset and the experimental findings. In particular, appending the CoT to the code appears to only slightly improve over code without CoT (if at all), while the CoT before the code appears to significantly degrade performance. The latter result is surprising and would benefit from more analysis and discussion

Other Comments Or Suggestions: the Figure 1 legend is hard to read. Maybe make the lines thicker and add the symbols to the legend? Questions For Authors: My primary concern is that it's unclear whether these findings will generalize to other training datasets. As mentioned in the previous comments, I'd like to understand: 1. will other training data distributions yield the same results, or is there a specific characteristic of the CoTs that makes them more useful when appended? 2. why does prepending the CoT degrade the performance so significantly? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the helpful comments! We will summarize your concerns and provide our responses below. We explored many other training data distributions:

1. **different teacher model**: We take **GPT-4o-2024-08-06** as another teacher model to synthesize the CoT&Code dataset again. We perform experiments on DeepSeek-Coder-6.7B-Base using the methods mentioned in the paper and evaluate the model on EvalPlus with the following results:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ------------- | :------------- | -------------- | --------- | --------- | ----- |
| $C_{w/o}$ | 71.95 | 65.85 | 77.77 | 67.46 | 70.76 |
| $C_{follow}$ | 65.24 | 59.14 | 76.45 | 62.96 | 65.95 |
| $C_{precede}$ | 72.56 | 66.46 | 78.57 | 67.72 | 71.33 |

2. **different training stage**: We take the **the-stack** dataset from the pre-training stage to synthesize the dataset for the pre-training stage. We fine-tune the models and evaluate them with the same setting as in **(1)**:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ------------- | :------------- | -------------- | --------- | --------- | ----- |
| $C_{w/o}$ | 68.29 | 61.58 | 76.98 | 62.43 | 67.32 |
| $C_{follow}$ | 61.58 | 55.48 | 75.13 | 60.31 | 63.13 |
| $C_{precede}$ | 69.51 | 64.63 | 77.24 | 63.22 | 68.65 |

3. **different synthetic template**: We directly use educational instructions from OpenCoder to synthesize the dataset without giving the reference code. We fine-tune the models and evaluate them with the same setting as in **(1)**:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ------------- | :------------- | -------------- | --------- | --------- | ----- |
| $C_{w/o}$ | 67.07 | 60.97 | 76.71 | 60.31 | 66.27 |
| $C_{follow}$ | 62.19 | 54.87 | 71.95 | 57.93 | 61.74 |
| $C_{precede}$ | 68.29 | 60.36 | 78.57 | 62.16 | 67.35 |

4.
**different dataset source**: We take the code generation dataset synthesized with DeepSeek-R1 from an open-source website, and directly treat the think and output parts of R1 as the CoT and result, respectively. We fine-tune the models and evaluate them with the same setting as in **(1)**:

| | HumanEval Base | HumanEval Plus | MBPP Base | MBPP Plus | Avg. |
| ------------- | :------------- | -------------- | --------- | --------- | ----- |
| $C_{w/o}$ | 64.63 | 56.70 | 71.16 | 62.43 | 63.73 |
| $C_{follow}$ | 56.09 | 51.82 | 64.55 | 55.02 | 56.87 |
| $C_{precede}$ | 64.02 | 57.31 | 71.69 | 62.43 | 63.86 |

5. **CoT precedes code degrading performance**: In this case, the model has to learn the teacher's reasoning without prior knowledge. This step tends to be more difficult for the base model because the natural language distribution is more diverse. In particular, the base model often struggles to directly learn the thoughts generated by another strong model.

6. **discussion of attention weight**: Since the lengths of the sequences (instruction, code, and CoT) differ, we first extract the attention matrix of the last layer and then partition it into 3×3 sub-blocks using a bilinear 2D interpolation method. We normalize the attention matrix across the validation set to ensure that the attention weights are comparable across rows. Therefore, the x-axis of the resulting plot represents the sequential input of tokens from the concatenated input, with its division into three equal segments reflecting the equal-sized attention weight matrices between the instruction, code, and chain-of-thought parts.

7. **discussion of layer gradient norm**: The gradient norms of different layers in the right plot are generally smaller and show a smoother, more gradual transition between layers. Previous research suggests that smooth gradient norm distributions during training can reduce the risk of unstable optimization, resulting in more stable and consistent parameter updates.

8.
**discussion of inconsistent data**: This discussion aims to investigate the conditions under which high-quality code data can serve as a CoT. Our results in the inconsistent data experiment suggest that self-consistency alone is not a reliable indicator of code quality, since the teacher model may generate wrong test cases while the generated code is still correct.

9. **discussion of adversarial code**: The experiments investigate whether lexical information affects the quality of the code. Our results show that model performance suffers significantly under such perturbations, suggesting that current models often rely heavily on lexical representations for semantic understanding.
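To make the attention-weight summary described in item 6 above concrete, here is a minimal sketch on synthetic data. The segment lengths are hypothetical, and simple block averaging stands in for the authors' bilinear 2D interpolation:

```python
import numpy as np

# Hypothetical token counts for the instruction, code, and CoT segments.
seg_lens = [12, 40, 28]

rng = np.random.default_rng(0)
L = sum(seg_lens)
attn = rng.random((L, L))
attn /= attn.sum(axis=1, keepdims=True)  # each row is an attention distribution

# Boundaries of the three segments along the token axis.
edges = np.cumsum([0] + seg_lens)

# 3x3 summary: mean attention from segment i to segment j
# (block averaging as a stand-in for bilinear interpolation to a 3x3 grid).
summary = np.empty((3, 3))
for i in range(3):
    for j in range(3):
        block = attn[edges[i]:edges[i + 1], edges[j]:edges[j + 1]]
        summary[i, j] = block.mean()

# Row-normalize so the rows are comparable, as described in the rebuttal.
summary /= summary.sum(axis=1, keepdims=True)
print(summary.round(3))
```

The resulting 3×3 matrix plays the role of one cell-averaged attention plot: equal-sized blocks regardless of the original segment lengths, with comparable rows.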
A Theoretical Study of (Hyper) Self-Attention through the Lens of Interactions: Representation, Training, Generalization
Accept (poster)
Summary: The paper presents a theoretical study of self-attention mechanisms, specifically focusing on their representation, training, and generalization capabilities through the lens of mutual interaction among entities. The authors introduce a novel perspective called "interacting entities," demonstrating that a single layer of linear self-attention can effectively represent, learn, and generalize functions that capture pairwise interactions, even in out-of-distribution scenarios. They also propose two extensions: HyperFeatureAttention, which captures interactions between features, and HyperAttention, which models higher-order interactions among multiple entities. Claims And Evidence: The claims made in the paper are supported by a combination of theoretical proofs and empirical experiments. The authors provide evidence showing that self-attention can generalize well across different distributions and sequence lengths. Methods And Evaluation Criteria: The proposed methods, including HyperFeatureAttention and HyperAttention, are well-grounded in theoretical constructs and are evaluated against established benchmarks. The evaluation criteria make sense for the tasks at hand, focusing on representation and generalization across different scenarios. Theoretical Claims: I did not check the proofs. Experimental Designs Or Analyses: The experimental designs are sound and validate the theoretical findings. Supplementary Material: No. Relation To Broader Scientific Literature: The key contributions of the paper build on existing theories of self-attention and representation learning. The authors relate their findings to prior works on Transformers and attention mechanisms, positioning their contributions within the broader context of machine learning literature. Essential References Not Discussed: Many attention variants have been proposed in recent years, but this paper does not discuss much of that work.
Other Strengths And Weaknesses: The paper exhibits strengths in originality and clarity, providing a fresh perspective on self-attention mechanisms while effectively communicating complex ideas. However, some sections could be more concise, and additional examples could enhance the understanding of the proposed extensions. Other Comments Or Suggestions: NA. Questions For Authors: 1. How could the proposed HyperFeatureAttention and HyperAttention be used in MARL settings? 2. Are there any potential drawbacks or challenges when applying your methods to larger datasets or more complex tasks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *RESPONSE 1* (About the references): In the introduction we referenced several works on "theoretically understanding attention," but we agree that attention modifications could be discussed more. In the final version of the paper we will mention these approaches, i.e., papers such as DeBERTa (https://arxiv.org/pdf/2006.03654), Differential Transformer (https://arxiv.org/abs/2410.05258), and Selective Attention (https://arxiv.org/pdf/2410.02703). In addition, we will expand the discussion with the "mechanistic interpretability" literature, such as Transformer Circuits (https://transformer-circuits.pub), and with previous studies that attempted to understand and modify attention. We will also briefly mention NA2Q (ICML, 2023) and other relevant methods in the revised version of our paper.

*RESPONSE 2* (Use of HyperFeatureAttention and HyperAttention in MARL settings): Our work views tokens as interacting entities, a perspective that naturally encompasses agents in multi-agent reinforcement learning (MARL). In MARL, interactions are often more complex than simple pairwise relationships, and our proposed extensions are well suited to address such challenges. In particular, HyperAttention is designed to capture higher-order (e.g., three-way or n-way) interactions, which are common in collaborative or competitive multi-agent environments where an agent’s outcome may depend on the joint behavior of multiple agents. Similarly, HyperFeatureAttention enables the coupling of different feature-level interactions (such as spatial positions, velocities, or task-specific attributes), providing a more expressive representation of agent dynamics. Although our current experiments primarily focus on controlled settings, the theoretical framework and empirical evidence presented in Sections 5 and 6 of our paper indicate that these modules can be naturally extended to MARL scenarios.
In future work, we plan to investigate and validate these extensions on MARL benchmarks, which we believe will further demonstrate the versatility and potential impact of our approach in such applications.

*RESPONSE 3* (Potential drawbacks or challenges for larger datasets and more complex tasks): We strongly suggest you refer to RESPONSE 6 of our reply to reviewer xG3k, where we summarize the potential drawbacks in terms of model complexity and how to overcome them.
Summary: This paper proposes a theoretical framework viewing self-attention tokens as interacting entities. The key theoretical findings include proving the representational power of single-layer linear self-attention for pairwise interactions, demonstrating convergence of training under mild assumptions, and establishing generalization guarantees in both within-distribution and out-of-distribution settings. Additionally, the authors introduce two innovative extensions: HyperFeatureAttention, designed to capture couplings between different feature-level interactions, and HyperAttention, which models higher-order (multi-entity) dependencies. Claims And Evidence: The authors support their claims with rigorous theoretical proofs and empirical validation using a simplified "colliding agents" scenario. However, as the experimental results are confined to synthetic environments, further validation in more complex, real-world scenarios is necessary to fully support the claims. Methods And Evaluation Criteria: The theoretical methods employed are appropriate and contribute notably towards improving the interpretability and potential performance of self-attention mechanisms. However, the evaluation primarily uses synthetic environments, and extending experiments to more complex real-world scenarios or well-established benchmark datasets could strengthen the empirical claims. Theoretical Claims: I reviewed the proofs in the Appendices, which are the key contributions of this theoretical paper. The proofs are basically sound. However, clarity regarding symbolic definitions and intuitive explanations of mathematical formulations could be further improved. Experimental Designs Or Analyses: The experimental design is sound but might not be adequate. Additional experiments in more complex or empirical scenarios are lacking, which could further enhance confidence in the generalizability of the findings. Supplementary Material: Yes.
I reviewed the proofs for the theorems in the main text since they are the primary contributions. Relation To Broader Scientific Literature: This paper addresses gaps such as the interpretability of the attention mechanism and out-of-distribution generalization. Essential References Not Discussed: The paper cites many relevant studies, but might lack a detailed discussion of recent advances in Transformer interpretability. A more detailed comparison with current studies on Transformer interpretability would better highlight the contributions and novelties of this paper. Other Strengths And Weaknesses: **Strengths**: - The authors introduce a novel perspective for understanding the self-attention mechanism. - The generalization of attention mechanisms to novel HyperAttention modules is innovative and well-motivated. - The authors provide clear examples to demonstrate the ability of self-attention to capture mutual interactions. **Weaknesses**: - Complexity: While HyperAttention may capture higher-order interactions, this comes at the cost of increased computational complexity. The necessity of such complexity is questionable, especially given some research suggesting that higher-order interactions can be approximated using weighted pairwise interactions. - Limited empirical validation: The paper primarily focuses on theoretical contributions, with experimental results limited to controlled synthetic environments. The authors do not provide empirical evidence demonstrating HyperAttention’s superiority in practical tasks. More diverse empirical demonstrations would enhance the credibility of the findings. - Presentation: The presentation of theorems and formulations could be enhanced with clearer definitions and intuitive explanations. Other Comments Or Suggestions: - Lack of clear symbol explanations: For example, Eq. (3) and $f_a^h$ (Line 279 right column). - Formatting issues: Line 228 contains "let Let"; Line 232-233 uses wrong punctuation. 
Overall, while this is an interesting and meaningful study, it feels more like an intermediate-stage work. The authors provide rigorous mathematical foundations, but the lack of empirical validation beyond toy examples limits its impact. Improved readability, more detailed and intuitive theoretical explanations, and broader experimental validation would elevate the paper’s contribution. Questions For Authors: - Q1: How does the complexity of the HyperAttention scale when capturing very high-order interactions (e.g., four-way or higher)? Would this limit practical applicability? Can HyperFeatureAttention and HyperAttention mechanisms practically scale to large datasets? - Q2: How sensitive are your theoretical guarantees to changes in embedding dimension (specifically when d < N)? Could empirical experiments be performed to quantify this sensitivity? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: LINK (please open in a private window): https://drive.google.com/file/d/1lJ3HYR6i02jpm3CAxg4cu4J6icQXubJ8/view?usp=share_link

*RESPONSE 1* (On the experiments for "further validation in more complex, real-world scenarios," "more complex or empirical scenarios," and "well-established benchmark datasets"): For self-attention, our main goal is to provide rigorous theories for understanding the existing self-attention mechanism. Consequently, our current experiments primarily validate the theoretical framework of linear self-attention, offering strong empirical support, as opposed to conducting large-scale experiments on self-attention (which has already been done in much of the literature we cited). That being said, we have run large-scale experiments such as the famous many-body problem, where we predict the trajectories of bodies (please refer to the LINK).

*RESPONSE 2* ("clarity regarding symbolic definitions and intuitive explanations of mathematical formulations could be further improved" and "presentation of theorems and formulations ... with clearer definitions and intuitive explanations"): We made the symbolic definitions more distinguishable (for instance, we will replace the index-to-element mapping $s^{(n)}(i)$ with $\mathcal{X}^{(n)}(i)$, since the former may be confused with the count vector $\mathbf{s}^{(n)}$), and we further improved the mathematical formulations. In the final version of the paper we will provide clearer definitions and intuitive explanations.

*RESPONSE 3* (on the references on "Transformer interpretability"): Please refer to RESPONSE 1 of our reply to reviewer Ct39.

*RESPONSE 4* (on the concerns and questions about the complexity and practicality of Hyper(Feature)Attention): Please refer to the "RESPONSE 6 (Comparison Between Attention Models)" part of our reply to reviewer xG3k.
*RESPONSE 5* (Regarding empirical evidence for Hyper(Feature)Attention): Please refer to *RESPONSE 3* in our reply to reviewer N655.

*RESPONSE 6* (to Question 2): When we only remove the $d<N$ assumption, we start to get nonzero errors, i.e., approximations to the exact solutions to the task (which is the practical, real-world situation). This is explained both at the end of Section 3, in the "*Why d=N?*" part, and in Theorem B.4. In short: "... Our goal is to understand how self-attention captures mutual interactions, and d = N ensures orthogonal domain embeddings, yielding an exact and transparent representation. Compressing to $d < N$ is perpendicular to this focus and can be addressed separately using standard techniques... Starting with $d = N$ allows us to establish theorems that elucidate how self-attention captures pairwise interactions, while reducing to $d < N$ merely introduces a small approximation gap without altering the core theory. For completeness, we provide... (Theorem B.4)". That being said, we ran experiments to quantify this sensitivity on the same colliding-agents environment with $d<N$. As a result, we obtained nonzero errors (train, test, and out of distribution). As the embedding dimension approached the dictionary size, the errors approached zero. Although this is an important insight, the discussion is orthogonal to the theories in this paper, so we left those results for future work. However, we may add them to the final version of the paper.

*RESPONSE 7*: Lastly, we are thankful for pointing out the typos, formatting issues, and symbol-explanation issues. Given the length of the paper, we unfortunately made these small errors, which are easy to fix and will be fully fixed in the final version of the paper.
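The approximation-gap behavior that RESPONSE 6 describes for $d < N$ can be illustrated numerically via the Eckart-Young theorem: truncating the SVD gives the best rank-$d$ approximation, and its error shrinks to zero as $d \to N$. The matrix below is a random stand-in, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
F = rng.standard_normal((N, N))  # illustrative stand-in for a target interaction matrix

# Eckart-Young: the best rank-d approximation of F in Frobenius norm is the
# truncated SVD, and for given F and d that error is a fixed quantity.
U, s, Vt = np.linalg.svd(F)
errors = [np.linalg.norm(F - (U[:, :d] * s[:d]) @ Vt[:d]) for d in range(1, N + 1)]

# The error decreases monotonically in d and is (numerically) zero at d = N,
# mirroring the rebuttal's observation that errors vanish as the embedding
# dimension approaches the dictionary size.
assert all(e1 >= e2 for e1, e2 in zip(errors, errors[1:]))
print(f"rank-{N} error: {errors[-1]:.2e}")
```

This also reflects the reviewer's point about Theorem B.4: once $F$ and $d$ are fixed, the rank-$d$ error is fixed, so only the trend in $d$ (not an arbitrary $\varepsilon$) is meaningful.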
Summary: The paper studies the self-attention mechanism that is central to many of today's ML models (such as in NLP, computer vision and multi-agent systems). Instead of Transformers (which involve several layers of multi-head attention and other components), the paper examines a simplified model that consists of a unique linear self-attention head (linear = no softmax). Most application examples are toy multi-agent systems. The paper describes some mathematical properties of this simplified self-attention model and verifies them empirically; additionally it proposes two generalisations of self-attention, called HyperFeatureAttention and HyperAttention. The mathematical properties are that, given some assumptions (claimed to be mild in practice): * Self-attention can represent any pairwise interaction of tokens (theorem 3.1), * and does so more efficiently than any fully-connected neural network (theorem 3.2); * gradient descent converges to zero training error (theorem 4.4), * which also means perfect fit on a test set from the same distribution (theorem 4.8), * and perfect fit on sequences that are longer or shorter than in the training set (theorem 4.10). The two proposed extensions of self-attention are supposed to generalise the pairwise interaction property: * HyperFeatureAttention can represent interactions between several features of the same two tokens. The authors claim that even two layers of multi-head self-attention cannot do this. * HyperAttention can represent interactions between more than two tokens at a time. ## update after rebuttal Claims And Evidence: The theoretical claims are well supported, both by mathematical proofs and by experiments. An exception is the claim that everything generalises to $d<N$ (see below about theorem B.4). The proposed extensions of self-attention are justified theoretically, but not put into practice. 
However, it is conceivable that more traditional self-attention architectures can also represent these complex interactions, perhaps in surprising ways. (After all, LLMs use standard self-attention and are very good at modeling language!) An insightful further experiment would be to train and compare toy models with and without Hyper(Feature)Attention. Methods And Evaluation Criteria: The mathematical claims are proven and tested empirically, which is good. For the proposed extensions, see the previous point. Theoretical Claims: I checked the following proofs: * proof of 3.1: no issue * proof of 3.2: correct * proof of B.4: The "for every $\varepsilon$" claim is not proven and in fact seems to be wrong. In the proof the $\varepsilon$ is introduced by $\|B_* -F\| \leq \varepsilon$. But given $F$ and $d$, $B_*$ minimises $\|B_*-F\|$ over all rank-$d$ matrices, so $\|B_*-F\|$ is fixed! So the statement $\|B_* -F\| \leq \varepsilon$ breaks down for any $\varepsilon$ smaller than the fixed $\|B_* -F\|$. The theorem in its present formulation should therefore be removed from the paper. To save the claim that the $d<N$ case is reasonably similar to $d=N$, the authors should reflect carefully on what the Eckart-Young theorem and/or the Johnson-Lindenstrauss lemma actually imply for the question at hand. It may be possible to replace an "always" statement with a "with high probability" statement. * proof of 4.4: in the equations on line 1278 I think $1/B$ should be $2/B$, but this does not affect the validity of the proof (as ultimately it is about when these things are zero). I did not check the rest of the proof in detail. Experimental Designs Or Analyses: Looks good to me, but I did not thoroughly check them. Supplementary Material: Proofs of theorems, see above. Relation To Broader Scientific Literature: The most relevant literature tries to gain a deeper mathematical understanding of Transformers. 
For example, the paper cites a work that shows Transformers to be Turing-complete. As a part of understanding Transformers, many works also try to understand their training process, i.e., how the training loss converges. Both threads are continued in the theoretical contributions of this paper. Essential References Not Discussed: The paper cites Elhage et al. 2021, which shows that the authors are at least partly aware of the "mechanistic interpretability" literature. While it is only tangentially related to this paper, it could be valuable to consult and cite it more. In particular, the claim that earlier works "often do not delve into interpreting the roles of the learned parameters beyond attention patterns" (l. 73-75 col. 2) seems a bold statement given papers like https://arxiv.org/abs/2412.11965 or even the Elhage paper itself. (These papers describe the idea of expanding the parameters of attention heads in language models to represent token interactions.) Another point about the same sentence: to my knowledge attention patterns are not the state of the art in interpreting the behaviour of attention heads (https://icml.cc/virtual/2024/poster/32735) More generally, surveys on mechanistic interpretability of language models could be valuable resources: https://aclanthology.org/2020.tacl-1.54, https://arxiv.org/pdf/2405.00208, https://arxiv.org/pdf/2501.16496. The submission attempts not only to theoretically understand attention, but also to generalise or modify the mechanism. Previous studies have also attempted this, but none of them is cited here. Their motivations are various and not directly related to the present ones, but I think it would still make sense to briefly mention them. I am aware of the following (there are probably many more): https://arxiv.org/pdf/2006.03654, https://arxiv.org/abs/2410.05258, https://arxiv.org/pdf/2410.02703. Other Strengths And Weaknesses: The theoretical results are really nice! 
However, I doubt that the assumptions are as mild as the authors claim. I am particularly skeptical of assumption 4.2: In the case of language, some tokens are very rare, and (depending on their joint distribution) it is quite possible that two of these rare tokens never co-occur. As I understand the definitions, this would lead to a column (?) of zeros, contradicting full rank. The proposed generalisations of self-attention are also valuable ideas. But, in my opinion, this does not mean that standard self-attention is as weak as the authors claim. For example, in my understanding, the "skip-trigram "bugs"" (Elhage et al. 2021, quotation marks original) are essentially a theoretical construction that does not seem to occur in real language models. Real language models clearly "know" that "keep ... in mind" and "keep ... at bay" are correct but "keep ... in bay" is not. I assume this is because the attention output of "keep" is not the whole output: the attention head usually also attends to other tokens to some extent, such as the current token "in"; and there is a skip connection that enables direct usage of the embedding of "in" (which may encode bigram statistics, compare Elhage et al.'s section on zero-layer transformers). This questions to what extent HyperAttention is really more useful than more standard mechanisms. But again, whatever may be the case, it is definitely a valuable idea! Finally, I feel that the connection between the theoretical results and the proposed architectural modifications is a bit loose. Both parts would be valuable on their own. If they are to appear in the same paper, the authors should better emphasise how they are connected. Other Comments Or Suggestions: Typos, grammar and clarity: * Footnote 1: what is k and L? * l. 176: "self-self attention" * l. 228: "let Let," * l. 241: "Let ... as" -> "Let ... be" * l. 242: comma -> full stop * l. 
268: "converges" -> "converges to" * section 4.1: please state the assumptions before the theorem * l. 248 (col. 2): "than" -> "then" * l. 322: "showed" -> "show" * eq. (3): what is A? I could only guess it when I read appendix E. * l. 309 (col. 2): "Remark completeness" -> "remark for completeness" * l. 370 (col. 2): "a circle of radius R executes" -> "a circle of radius R that executes" * l. 1623: I guess the second H should be an A. Questions For Authors: You have given an example of an interaction that standard two-layer multi-head attention apparently cannot represent, but HyperFeatureAttention can. Conversely, are there interactions that HyperFeatureAttention cannot represent but standard (two-layer multi-head) attention can? This is quite important if we want to know how useful HyperFeatureAttention really is. And whatever the answer, you should talk about it in the paper. Ethical Review Concerns: No concerns. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: *RESPONSE 1* (the concerns on the practicality of the proposed models and their use given that self-attention works in NLP): We agree that traditional self-attention architectures can also represent these complex interactions "in surprising ways". However, the main distinction is that the proposed models can represent those with fewer parameters. For instance, this is discussed in Appendix E.1 and in Appendix E.2's "a short note on approximate embedding" paragraph. We agree that we need to carry some of those discussions into the main text and make this clear there, which will be done in the revised version. Another point is that we do not advocate replacing conventional self-attention with the proposed models. For more details we strongly suggest referring to the "RESPONSE 6" part of our reply to reviewer xG3k. *RESPONSE 2* (insightful experiments): As for the small-scale experiments, we have promising preliminary results; please refer to the "RESPONSE 3" part of our reply to reviewer N655. *RESPONSE 3* (the concerns on the proof of Thm B.4): We agree with your concerns and we think that the old version was not clear. Therefore, we completely changed the theorem statement and the proof, keeping the essence the same as the old version. In the new version, we use low-rank approximations to the matrices that would solve the task with zero error if the embedding dimension were large enough. We bound the infinity norm between the output of the linear self-attention having $d<N$ and the correct labels. The bound is in terms of singular values of the matrices that would solve the task with zero error if the embedding dimension were large enough. Unfortunately, we are not allowed to provide the whole theorem and proof via anonymous links.
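The Eckart-Young fact at the center of this exchange is easy to check numerically: given $F$ and a target rank $d$, the best rank-$d$ approximation error is a fixed quantity, equal to the $(d+1)$-th singular value in spectral norm. This is both the reviewer's objection to the old "for every $\varepsilon$" statement and the quantity the revised bound is phrased in terms of. A minimal NumPy sketch (the matrix $F$ here is an arbitrary stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 3
F = rng.standard_normal((N, N))  # arbitrary stand-in for the target matrix

# Best rank-d approximation B_* via truncated SVD (Eckart-Young).
U, s, Vt = np.linalg.svd(F)
B_star = U[:, :d] @ np.diag(s[:d]) @ Vt[:d, :]

# Given F and d, the approximation error is FIXED: it equals sigma_{d+1}
# in spectral norm, so no statement can hold for epsilon below it.
err = np.linalg.norm(F - B_star, 2)
assert np.isclose(err, s[d])

# Any other rank-d matrix does at least as badly: perturb B_*, re-project
# to rank d, and compare.
B_other = B_star + 0.1 * rng.standard_normal((N, N))
U2, s2, Vt2 = np.linalg.svd(B_other)
B_other = U2[:, :d] @ np.diag(s2[:d]) @ Vt2[:d, :]
assert np.linalg.norm(F - B_other, 2) >= err - 1e-12
```

This also illustrates why phrasing the revised Theorem B.4 in terms of the singular values of the zero-error matrices is the natural fix.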
*RESPONSE 4* ("earlier works often do not delve into interpreting the roles of the learned parameters beyond attention patterns"): While making that statement we were mostly thinking about papers that are not only theoretical but also \emph{mathematical}, with rigorous theories, predictions, and experimental validations of those predictions. We apologize for the confusion and will clarify this in the final version. In more detail, our approach differs from the conventional mechanistic-interpretability approach in that ours is rigorous/mathematical and our experiments serve to validate our predictions. Their approach is like "phenomenology", and our approach is "fundamental". *RESPONSE 5* (About the scepticism on Assumption 4.2): In the revised version of the paper we will add a section to the appendix that justifies Assumption 4.2. In short, assuming $\mathbf{s}^{(n)} \in \mathbb{R}^{|\mathcal{S}|}$ are sampled from a distribution such that the covariance of $\mathbf{s}^{(n)}$ is positive definite, we will use the matrix Bernstein inequality and Weyl's inequality to prove that the statement in Assumption 4.2 holds with high probability: for some $\gamma >0$, where $B$ is the number of data samples, $\mathbb{P}\left(\mathrm{rank}(\mathbf{S}_{B\mu}) < |\mathcal{S}|\right)\le e^{-\gamma B}$ *RESPONSE 6* (real LLMs not falling into the skip-trigram bug): We agree that LLMs as a whole do not have such bugs. As you mentioned, the skip connections from earlier layers help to approximately get rid of those problems, and by adding more layers, the approximations get better. In our analysis, we focus on self-attention itself (not a large neural network) and compare it with our new modules (small modules, not large networks). We strongly suggest looking at the RESPONSE 6 part of our reply to reviewer xG3k (especially the last paragraph of it), which we will add to the paper.
*RESPONSE 7* (On the question about the usefulness of HyperFeatureAttention (HFA)): Both HFA and self-attention have the same complexity, so we compared them by parameter count, since even MLPs can represent any function with enough parameters. Our intentionally unfair comparison used two-layer multihead self-attention against a single-layer single-head HFA. A fairer comparison (single-layer single-head vs. single-layer single-head) shows HFA captures self-attention. We acknowledge that some interactions require two-layer multihead self-attention, but for more on HFA's utility, please see "RESPONSE 3" in our reply to reviewer N655. *RESPONSE 8* (connection between theoretical results and the proposed modifications): In various colliding-agent example scenarios in Appendix B.1.3 and in "Non-Identical Agents Revisited" in Appendix E.1, we show how HFA is a natural construction from the theoretical insights on self-attention. Though seemingly unrelated, they were stepping stones. Thank you; we'll clarify. *RESPONSE 9*: For essential references please refer to "RESPONSE 1" to reviewer Ct39.
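The high-probability full-rank claim sketched in RESPONSE 5 can be probed with a toy simulation (our own construction, not the paper's exact setup): draw $B$ one-hot samples whose population second-moment matrix is positive definite, including very rare symbols, and track how often the sample second-moment matrix is rank deficient as $B$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
S, trials = 6, 2000
# Symbol probabilities with two very rare entries; the population
# second-moment matrix E[s s^T] = diag(p) is positive definite.
p = np.array([0.40, 0.30, 0.20, 0.08, 0.015, 0.005])

def rank_deficient_rate(B):
    bad = 0
    for _ in range(trials):
        idx = rng.choice(S, size=B, p=p)
        X = np.eye(S)[idx]   # one-hot samples s^(n), shape (B, S)
        M = X.T @ X / B      # sample second-moment matrix
        if np.linalg.matrix_rank(M) < S:
            bad += 1
    return bad / trials

rates = {B: rank_deficient_rate(B) for B in (20, 200, 2000)}
# The failure probability should decay (exponentially, per the matrix
# Bernstein argument) as the number of samples B grows.
assert rates[20] >= rates[200] >= rates[2000]
```

With one-hot samples, rank deficiency occurs exactly when some symbol never appears, whose probability decays like $(1-p_i)^B$; this matches the exponential tail claimed in the rebuttal and also speaks to the reviewer's rare-token concern.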
Summary: This paper is separated into two broad sections. The first is a theoretical study of linear self-attention, which explores the expressivity and generalisability of a single linear self-attention layer. The second is more empirical, proposing two new architectures based on self-attention. The performance of these models is demonstrated experimentally. Claims And Evidence: Following on with the two broad sections of the paper: The theoretical portion of the work appears very rigorous, with claims well justified and all assumptions laid out clearly. While I was unable to robustly verify every proof, due to my limited expertise in this area, I believe what I have checked holds up. I think the experimental section leaves slightly more to be desired. The theory is tested for one indicative problem setting, with two different types of embedding. I may be missing something, but while there is discussion of performance for the 'One-Hot Embedding' problem, I am unable to find these results in the paper. Figure 1 applies to Sinusoidal Embeddings, but showing the theoretical results to allow us to compare would be useful. I think demonstrating these findings on more than one problem setting would also be good, although I appreciate that I lean more on the empirical side and this is a strongly theoretical paper. One claim I did not see justified was in footnote 5: that the approach in this paper has greater representational capacity *while maintaining comparable efficiency* - it is later mentioned that HyperAttention in this paper is $\mathcal{O}(L^3)$, and so it would be good to demonstrate that this does not lead to drastic computational increases compared to the low-rank approximation of prior work. On a slightly separate note, a lot of the appendix is filled with figures which are neither discussed nor referenced in the main text.
If these plots provide evidence or information, it would be good to at least have an in-text reference for them in the main body of the paper. Methods And Evaluation Criteria: I have sort of discussed this above. I think additional experimentation, using more than one problem setting, would be good, but I think it is worth bearing in mind that the key contribution of this paper reads as more theoretical than empirical and thus the bar for experimental results should, rightly, be lower. The motivation of the theory is good, and all simplifying decisions and assumptions are well justified. I personally found the introduction of the methods somewhat hard to follow - for a less theoretically-minded person like myself, a short statement explaining, in words, what the HFA and HA layers actually do and how they differ from SA would be helpful. Theoretical Claims: I attempted to verify a number of the proofs, though was unable to check every single one due to the sheer length of the paper and time constraints (@AC, see comment). What I checked seemed to be correct. Experimental Designs Or Analyses: I think additional experimental settings could be good as a means to improve the paper, but as stated I do not think there is a desperate need for that in a principally theoretically-minded paper. The experiment that they do include would benefit from some additional discussion in the main body of the paper, rather than purely relegating that to the appendix, and I am unable to find the results for one-hot embeddings. Supplementary Material: A large part of the paper is included in the appendix only, so I spent some time verifying proofs from the appendix and judging some of the results. Relation To Broader Scientific Literature: The paper seems well researched and placed in the scientific literature, although I am not well placed to contextualise this work.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: I found most of the paper well written and clear, and feel it is clearly of a high quality. That said, I found a few points that I wanted to raise. - I found the examples used (particularly the contextualising example about alleles) a little bit counterintuitive/out of nowhere. Finding a way to ground an example in language would be good, given it is the field which has arguably seen the largest breakthroughs as a result of self-attention. - It would be nice in the main paper to highlight with a bit more discussion the effect of not having $d=N$, as this introduces some level of compression and thus may violate some of the approximations despite being a more practical system. - I found line 248 ('In short, ...') confusing English. - I found the HyperFeatureAttention notation a bit confusing - what are the curly brackets for, are they just used as normal parentheses? - The frequent tense changes in the paragraph starting 'Setup: Colliding Agents on a Cylindrical Grid.' made the text quite hard to follow. Other Comments Or Suggestions: See above. Questions For Authors: - Can you please explain how you can verify that HyperAttention is comparably efficient to prior work in this area, which uses low rank approximations? - Given this theory, what are the violations that make the expected guarantees not generally true in the real world - as in, why do many systems in practice **not** have zero error and perfect generalisation to new data and completely different lengths? - Can you please explain what the use of HyperFeatureAttention and HyperAttention are over self attention, given the guarantees already present for self-attention? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: LINK (OPEN IN PRIVATE WINDOW): https://drive.google.com/file/d/1lJ3HYR6i02jpm3CAxg4cu4J6icQXubJ8/view?usp=share_link *RESPONSE 1* (the results of one-hot embedding): The results were explained as "negligible error $\Theta(10^{-7})$ on test and out of distribution data". However, we agree, and for completeness we will add the learned parameters to the paper (see LINK). *RESPONSE 2* (additional experiments for theory validation): We tested the theories on the genotype-phenotype mapping task (explained in Sec. 3, Example 2 and Appendix B.2) and got the same results: $\Theta(10^{-7})$ error on test and OOD data. We may add them to the paper. *RESPONSE 3* (the figures in the appendix "which are neither discussed nor referenced in the main text"): discussed in Sec. 7 as "shown in Figures 3 and 4, these matrices are indistinguishable from the theoretical counterparts" *RESPONSE 5*: Sanford et al. share our $\mathcal{O}(L^3)$ complexity. Footnote 5 is unnecessary and will be removed. *RESPONSE 6* (Comparison Between Attention Models): We will add the following discussion and the attached table (LINK) to the paper. Letting the embedding dimension be $d$ and the sequence length be $L$, we compare those models in terms of number of parameters, computational complexity, and abilities. - For self-attention, it captures mutual interactions between entities (cf. Thms 3.1, 4.4, 4.6, 4.8). If it has multiple heads, it can capture summation over mutual interactions between features of the entities. It has ${\Theta}(d^2)$ parameters, and its computational complexity is ${\Theta}(L^2)$. In order to reduce the computational complexity to ${\Theta}(L)$, people came up with approximations called "Linear Attention" (Katharopoulos et al. 2020, Wang et al. 2020). However, despite the name "linear", the method is used to approximate nonlinear self-attention. - HyperFeatureAttention captures couplings of mutual interactions between features (cf. Sec. 5, Appendix E).
If it has multiple heads, it can capture summation over couplings of mutual interactions between features. Like self-attention, HyperFeatureAttention has ${\Theta}(d^2)$ parameters, and its computational complexity is ${\Theta}(L^2)$. The same Linear Attention approximations can be applied to HyperFeatureAttention, reducing its computational complexity to ${\Theta}(L)$. Since the main goal of the paper is not this approximation for HyperFeatureAttention, we did not show it explicitly. - As for HyperAttention of order $n$, it captures up to $n^{th}$ order interactions (refer to Sec.~6 and Appendix~F). If it has multiple heads, it can capture summation over up to $n^{th}$ order interactions between features of the tokens. It has ${\Theta}(d^2)$ parameters, and its computational complexity is ${\Theta}(L^n)$. Using similar Linear Attention approximations, in Appendix~F.3, we explained how to reduce the computational complexity of HyperAttention to ${\Theta}(L)$. We will add the requirement that if the entries of the query and key matrices are $o(\sqrt[3]{\log L})$, then the infinity norm between the actual and approximate outputs is less than $1/\mathrm{poly}(L)$. - While we do not advocate replacing conventional self-attention with HyperAttention or HyperFeatureAttention, we propose these mechanisms as complementary enhancements. In a typical multi-head architecture, certain heads may employ standard self-attention while others utilize HyperFeatureAttention (of varying orders) or HyperAttention to capture richer, higher-order interactions. Depending on the computational constraints, the HyperAttentions may leverage the linear approximations described in Appendix~F.3. *RESPONSE 7* ("the violations that make expected guarantees not generally true in the real world" and "why don't we have zero test errors and what happens when $d<N$"): When we only have the $d<N$ assumption, we start to get nonzero errors, i.e.
approximations to the exact solutions to the task (which is the practical real world). This is explained both in the "*Why d=N?*" part at the end of Sec. 3 and in Thm B.4. Another key assumption in our analysis is the (weak, strong, universal) realizability. Although these conditions may not hold in practical scenarios, implying that a single-layer, single-head self-attention mechanism may not fully capture the complexities of real-world tasks, it is important to note that SOTA models typically employ multiple layers of self-attention blocks. Such architectures can approximate the desired functions even when the strict realizability assumptions are violated. Actually, the realizability and d=N assumptions are very similar in essence because, letting d be large enough, we showed in Appendix B that realizability is satisfied (with several example scenarios such as colliding agents, genotype-phenotype mapping, a vision task, and time-series prediction). A thorough investigation of these approximations lies beyond the scope of this work. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your response, as well as taking the time to generate additional results. Re: One hot, I might be missing something but I am not sure in the PDF where results showing the accuracy (i.e. $10^{-7}$) are. Let me know if there's something I've not seen. Re: additional experiments, I think including these results would be beneficial. Crucially, though, I think these results need to be *shown* rather than stated without evidence. Re: Figure references, I apologise - this was an oversight on my part. Re: complexity, that is good to know, thank you for clearing it up. Re: Comparison, I think that a breakdown of this type would be really helpful for the flow of the paper, probably attached to a paragraph explaining what the comparison means.
Re: violations, I think it is fair to recognise that as a theoretical piece of work this will rely on certain assumptions which may not always hold for real-world problems, but that this is OK. I agree with your comments that investigating these approximations goes beyond the scope of this work, which is already quite dense as-is. I do not plan on increasing or decreasing my score - I have recommended acceptance because I think this is a well written paper and answers some interesting questions. That said, it is also quite beyond my domain of expertise and so I am unable to make any strong recommendation due to a lack of confidence in my review. All of that said, well done and good job! --- Reply to Comment 1.1.1: Comment: Thank you for the rebuttal comment. You can find the result for one-hot at line 408 left column.
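As a concrete illustration of the Θ(L) "Linear Attention" approximation cited in the complexity comparison above (Katharopoulos et al., 2020): the softmax weights are replaced by a kernel feature map φ, so the L × L attention matrix never has to be materialized. A minimal non-causal sketch with φ = elu + 1, as in that paper (shapes and variable names are ours):

```python
import numpy as np

def phi(x):
    # elu(x) + 1, a positive feature map (Katharopoulos et al., 2020)
    return np.where(x > 0, x + 1.0, np.exp(x))

rng = np.random.default_rng(0)
L, d, dv = 128, 16, 16
Q, K = rng.standard_normal((2, L, d)) * 0.1
V = rng.standard_normal((L, dv))

# Quadratic form: Theta(L^2) -- materializes the L x L weight matrix.
A = phi(Q) @ phi(K).T
out_quadratic = (A @ V) / A.sum(axis=1, keepdims=True)

# Linear form: Theta(L) in sequence length -- accumulate d x dv and d-dim
# summaries of the keys/values once, then read them out per query.
S = phi(K).T @ V          # (d, dv)
z = phi(K).sum(axis=0)    # (d,)
out_linear = (phi(Q) @ S) / (phi(Q) @ z)[:, None]

assert np.allclose(out_quadratic, out_linear)
```

The two forms are algebraically identical; only the order of association changes, which is what drops the cost from Θ(L²) to Θ(L). The rebuttal's Appendix F.3 claim is that an analogous regrouping applies to HyperAttention.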
Summary: The paper introduces a broad theoretical perspective to analyze the capabilities of self-attention mechanisms, particularly focusing on the interactions between entities. The paper extends the traditional pairwise self-attention to higher-order interactions and presents two novel mechanisms: HyperFeatureAttention and HyperAttention. These mechanisms capture feature-level and multi-entity dependencies, respectively. The authors provide a comprehensive theoretical analysis demonstrating how self-attention learns and generalizes mutual interactions, with guarantees for out-of-distribution (OOD) generalization. The paper also includes experiments validating these theoretical claims, showing that the learned parameters align with the theoretical predictions. Claims And Evidence: The paper presents its claims clearly and proves them. Methods And Evaluation Criteria: The introduction of HyperFeatureAttention and HyperAttention extends the self-attention mechanism by capturing higher-order interactions, a potentially useful contribution for complex multi-agent systems and tasks involving feature dependencies. It also demonstrates that the proposed linear self-attention framework converges to optimal solutions under mild assumptions and generalizes well across varying sequence lengths and tasks. Theoretical Claims: The paper provides a clear and unified framework to interpret self-attention mechanisms through the lens of mutual interactions across various domains. It offers new insights into how self-attention works and generalizes, specifically with respect to out-of-distribution shifts and interaction learning. Experimental Designs Or Analyses: Although the theoretical claims are supported by experiments, the experiments mainly focus on simple toy problems (e.g., colliding agents). More real-world and complex experimental setups are necessary to fully assess the robustness and scalability of the proposed methods in practical settings.
Supplementary Material: Yes, I have reviewed the supplementary material. Relation To Broader Scientific Literature: This paper engages with a broad literature. Essential References Not Discussed: Regarding high-order interactions in multi-agent reinforcement learning, an important work should be considered in the introduction or related works section: NA2Q: Neural Attention Additive Model for Interpretable Multi-Agent Q-Learning, ICML, 2023. Other Strengths And Weaknesses: While the introduction of HyperFeatureAttention and HyperAttention is conceptually valuable, the increase in complexity might hinder the practical applicability of these extensions for large-scale problems. The computational cost and potential trade-offs between expressiveness and efficiency are not fully addressed. Other Comments Or Suggestions: The paper emphasizes the interpretability of the learned parameters, but in some cases, the learned parameters (especially with sinusoidal embeddings) lack clear, intuitive interpretations. Further exploration of how to make these parameters more interpretable in complex settings would strengthen the practical relevance of the work. Questions For Authors: 1. The HyperAttention mechanism captures higher-order dependencies, but its computational cost scales cubically. How do you plan to optimize this mechanism for large-scale applications? Are there approximations or heuristics you are considering to make it more efficient in real-world settings? 2. While the experiments validate your theoretical findings on simple tasks like colliding agents, how does your framework perform on more complex and realistic datasets (e.g., NLP tasks, computer vision, multi-agent reinforcement learning)? Can you extend the experiments to such tasks to further validate the scalability and generalization of your methods? 3.
While you discuss the interpretability of learned parameters, some experiments (especially those with sinusoidal embeddings) reveal that the learned parameters do not always align with intuitive expectations. Could you provide further insights on how you plan to improve interpretability in more complex settings or for larger models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: LINK (please open in a private window): https://drive.google.com/file/d/1lJ3HYR6i02jpm3CAxg4cu4J6icQXubJ8/view?usp=share_link *RESPONSE 1* (On Computational Complexity of Models and Response to Question 1): In our appendix, we showed that by leveraging linear self-attention approximations, the computational complexity of HyperAttention, even with the softmax activation function, can be reduced to $\mathcal{O}(L)$. We suggest looking at the "RESPONSE 6" part of our reply to reviewer xG3k, where we answered this question in detail and described the revisions we will make. *RESPONSE 2* (to Question 2): For self-attention, our main goal is to provide rigorous theories for the understanding of the existing self-attention mechanism, as opposed to conducting large-scale experiments (which has already been done in much of the literature, as we cited). Our current experiments validate the theoretical framework of linear self-attention. *RESPONSE 3* (Regarding empirical evidence for Hyper(Feature)Attention): We agree experimental verification of HFA and HA is important. As this work is primarily theoretical, we now present small-scale experiments while working on large-scale versions in a follow-up paper. In our small-scale experiments, we evaluate next-token prediction under three settings. 1) 1-layer 1-head self-attention and HyperFeatureAttention with a context window of 1024, with all of the rest of the hyperparameters the same as in the GPT-2 paper (e.g. embedding dimension is 768, etc.). 2) 3-layer 3-head self-attention and HyperFeatureAttention experiments with all of the hyperparameters the same as GPT-2 (context window of 1024, embedding dimension of 768, and so on). 3) 1-layer 1-head self-attention, HyperAttention, and HyperFeatureAttention with a context window of 256, with all of the rest of the hyperparameters the same as in the GPT-2 paper (e.g. embedding dimension is 768, etc.). In all experiments the networks we compare have the same number of parameters.
Self-attention and HyperFeatureAttention networks have the same computational complexities, but HyperAttention has $\mathcal{O}(L^3)$ complexity (we did not apply the complexity-reducing approximations yet). We deliberately adopted certain design choices that do not favor our proposed models (i.e., that implicitly favor self-attention) to ensure a rigorous evaluation. Although these results are preliminary, they consistently indicate that networks incorporating HyperAttention and HyperFeatureAttention achieve better perplexities and cross-entropy losses compared to those based solely on standard self-attention. For a comparison of the results, please refer to the *LINK*. As for large-scale experiments with comparisons to SOTA, we have deferred extensive, large-scale experiments for the new models to a forthcoming follow-up study because 1) this paper focuses on fundamental theoretical insights and 2) we have possible variations that preserve the essence of the Hyper(Feature)Attention models yet may enhance empirical performance; thoroughly exploring these refinements and different possible architectures involves extensive research. In addition, in the various colliding-agent example scenarios in Appendix B.1.3 and "Non-Identical Agents Revisited" in Appendix E.1, with several examples, we show step by step how HyperFeatureAttention is a natural construction from the theoretical insights on self-attention. Similarly, we explain in Section 6 how HyperAttention is a natural construction from our self-attention theories. It is also worth noting that HyperFeatureAttention and HyperAttention are designed to supplement rather than replace existing self-attention mechanisms.
For instance, in a multi-head architecture, some heads may employ standard self-attention, while others integrate HyperFeatureAttention (to capture cross-coupled feature interactions) and still others employ HyperAttention (e.g., order-3 or order-4) to capture higher-order interactions; even a combination of HyperFeatureAttention and HyperAttention is possible. *RESPONSE 4* (the interpretability of parameters with sinusoidal embedding and response to question 3): We believe that the confusion was merely due to a narration problem. It was stated "..., the learned parameters do not overlap with the parameters we originally devised, especially for the sinusoidal embedding, seen in Figure 2; these learned parameters lack easy interpretation ... as discussed in Corollary D.5, this outcome is a natural consequence of the generalization theories ... Therefore we focus on" transformed versions of the parameters ... Further, our generalization theories are on those transformed versions. "As shown in figures 3 and 4, these matrices are indistinguishable from the theoretical counterparts, with only ${\Theta}(10^{-4})$ mean square distance between their entries. Thus, in this simple setting, we can fully interpret the meaning of the parameters". We will fix the narration. *RESPONSE 5*: For references please see RESPONSE 1 to reviewer Ct39.
EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification
Accept (poster)
Summary: This paper proposes a novel ensemble framework, EnsLoss, for mitigating overfitting in binary classification tasks. EnsLoss is motivated by the calibration property studied in the literature, combined with the idea of ensembling, which keeps the 'equivalent loss' calibrated throughout the ensemble training process. An excess risk bound and empirical experiments are provided to justify the proposed method. ## update after rebuttal I keep the same score (weak acceptance) as before. The proposed method looks novel to me, and there is some theoretical justification and there are empirical experiments supporting the method. Given that there is still space for further improvement on i) the multiclass setting; ii) more (or better) theoretical justifications; iii) more empirical baselines for comparison, I don't raise my score. Claims And Evidence: One claim that is not convincing to the reviewer is that EnsLoss can mitigate overfitting. Does EnsLoss enjoy a better generalization error bound? Or is the excess risk bound better than for a fixed single loss? Even in the empirical experiments, there is no demonstration of whether EnsLoss mitigates overfitting or simply fits the data better. Perhaps the training performance should also be presented empirically to support the claim. Methods And Evaluation Criteria: Yes, the evaluation generally makes sense to me. The only concern is that there is still a gap to the paper's claim of 'mitigating the overfitting issue'. Theoretical Claims: I didn't check the theoretical proofs thoroughly in the appendix, but the flow in the main content looks correct. Experimental Designs Or Analyses: The experimental design and analyses generally make sense to me. However, as mentioned previously, I don't see much connection and justification to 'mitigating the overfitting issue'. Supplementary Material: I didn't check the supplementary material.
Relation To Broader Scientific Literature: The paper contributes a novel ensemble method for binary classification motivated by the calibration property. Essential References Not Discussed: I didn't notice any missing essential references. Other Strengths And Weaknesses: Strengths: 1. The paper is well organized and the flow is clean. 2. The motivation is clear and the proposed ensemble method should be novel. 3. Some theoretical justification and empirical results are provided. Weakness: 1. Neither the theoretical nor the empirical results clearly support that the proposed method can mitigate overfitting. There is still a gap in justifying the claim (also see the previous 'Claims And Evidence'). 2. The application to binary classification is restrictive. Given that calibration theories have also been developed for multi-class classification in the literature, this work could have higher impact if extended. 3. As a novel ensemble method, there is no comparison with existing ensemble methods. We don't know whether the proposed ensemble method is better, nor whether it is compatible with existing ensemble methods. 4. Minor concern: the fonts in Figs 1 and 5 are too small to be viewed clearly. Other Comments Or Suggestions: Based on my previous comments: 1. It would be better for the paper to justify the claim of 'mitigating the overfitting issue' more clearly. 2. It would be more impactful to extend the method to the multi-class classification setting. 3. It would be better to study/show the relations with existing ensemble methods, in order to fully demonstrate the impacts or limitations of the proposed method. Questions For Authors: Please see previous comments, thanks. Code Of Conduct: Affirmed. Overall Recommendation: 3
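To make the mechanism under review concrete, here is a toy sketch of the general EnsLoss idea as described in the summary: at each minibatch a calibrated convex surrogate loss is drawn at random, so the 'equivalent loss' being minimized is an ensemble of calibrated losses. The pool of losses and the sampling scheme below are our illustrative choices, not necessarily the paper's exact construction:

```python
import numpy as np

# Pool of classification-calibrated surrogate losses phi(m), m = y * f(x):
# all are convex and decreasing at 0, hence calibrated (Bartlett et al. 2006).
LOSSES = {
    "hinge":    lambda m: np.maximum(0.0, 1.0 - m),
    "logistic": lambda m: np.log1p(np.exp(-m)),
    "exp":      lambda m: np.exp(-m),
}

def ensloss_batch(margins, rng):
    """One stochastic-ensemble step: draw a calibrated loss at random and
    return its mean over the minibatch (its gradient would drive the update)."""
    name = rng.choice(list(LOSSES))
    return name, float(np.mean(LOSSES[name](margins)))

rng = np.random.default_rng(0)
margins = np.array([2.0, -0.5, 1.2, 0.1])   # y * f(x) for a toy batch
for _ in range(3):
    name, value = ensloss_batch(margins, rng)
    assert name in LOSSES and value > 0.0
```

Training then averages over the random draws, which is the sense in which the effective ('equivalent') loss is an ensemble, and, since every member of the pool is calibrated, the reviewer's summary of the method's key property follows.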
Rebuttal 1: Rebuttal: > (No theoretical evidence EnsLoss reduces overfitting - missing analysis of generalization error bounds or excess risk) **Reply.** Thank you for this insightful question. We do not explicitly provide a generalization bound, yet following your suggestion, we can show the advantages of *ensLoss* through the lens of Rademacher (Rad) complexity. The Rad complexity plays a crucial role in many concentration inequalities, so it is helpful for inferring the benefits in generalization bounds. The Rad complexity of $\phi$-classification is: $$ \text{Rad}\_{\phi}(\mathcal{F}) := \sup\_{f \in \mathcal{F}}|\frac{1}{n} \sum\_{i=1}^n \tau_i \phi(Y_i f(\mathbf{X}_i))|, $$ where $(\tau_i)$ are i.i.d. Rad random variables. The corresponding Rad complexity for ensLoss is: $$ \overline{\text{Rad}}( \mathcal{F}) := \sup\_{f \in \mathcal{F}} |\frac{1}{n} \sum\_{i=1}^n \tau_i \mathbb{E}\_{\Phi} \Phi (Y_i f(\mathbf{X}_i) )| \leq \mathbb{E}\_{\Phi} (\text{Rad}\_{\Phi}(\mathcal{F})). $$ Hence, the Rad complexity of ensLoss is no worse than the average over a set of fixed losses, showing at least one theoretical benefit. Yet, this does not provide conclusive evidence of its superiority over fixed losses. We will honestly report the potential advantages and limitations in our revision. > (No numerical evidence EnsLoss reduces overfitting - training results should be shown alongside test results) **Reply.** We apologize for not including the training results. This was because almost all methods achieved a train accuracy of 1 (or close to 1), as demonstrated below for the CIFAR2 (dog-cat) dataset. 
| Dataset | Model | Loss | Train Acc | Test Acc |
|----------|----------|---------|-----------|----------|
| CIFAR2 | ResNet34 | ensLoss | 1.00 ± .00 | 0.72 ± .02 |
| | | Hinge | 1.00 ± .00 | 0.70 ± .01 |
| | | BCE | 1.00 ± .00 | 0.69 ± .01 |
| | | EXP | 1.00 ± .00 | 0.60 ± .01 |
| | VGG16 | ensLoss | 1.00 ± .00 | 0.81 ± .01 |
| | | Hinge | 1.00 ± .00 | 0.78 ± .01 |
| | | BCE | 1.00 ± .00 | 0.77 ± .01 |
| | | EXP | 1.00 ± .00 | 0.69 ± .00 |

Moreover, in Table 6, the accuracy appears to decrease when the model becomes more complex (e.g., from VGG16 to VGG19). This is also a typical phenomenon associated with overfitting. Therefore, we reasonably conclude that the improvement from *ensLoss* comes from mitigating overfitting, not from simply fitting the data better. > (Limited to binary classification - extend to multiclass where calibration theories exist) **Reply.** Thank you for this valuable comment. We agree that extending our work would be beneficial. Our current focus on binary classification is deliberate for several reasons: First, the if-and-only-if condition for calibration in the binary case is well-studied and straightforward to implement, allowing us to directly validate our ensemble idea without additional complications. Second, as indicated in Theorem 7 of [1], the iff calibration conditions for the multiclass case are substantially more complex. Implementation would require additional specifications in the formulations and sufficient conditions for calibration, as shown in Section 4 of [2] and Section 5 of [1]. While extension is possible using simplified conditions and specific formulations (Prop 15, [1]), there remains ongoing debate regarding optimal formulations. The binary case allows us to avoid these debates and directly compare the core ensemble idea. Finally, binary classification is itself a fundamentally important problem with many applications, so we believe the improvements are also highly significant. **References:** 1. Tewari & Bartlett (2007). 
On the Consistency of Multiclass Classification Methods. JMLR. 2. Zhang (2004). Statistical analysis of some multi-category large margin classification methods. JMLR. > (No comparison with existing ensemble methods) **Reply.** Agreed. To illustrate, we report test performance for model averaging (m-ave) and majority voting (m-vote) on CIFAR2 (dog-cat) using ResNet101 over three fixed losses:

| | Acc | AUC |
|-|-|-|
| BCELoss | 67.33 (.23) | 73.35 (.24) |
| EXP | 53.62 (.48) | 54.86 (.35) |
| Hinge | 67.12 (.32) | 68.60 (.66) |
| **m-ave** | 68.14 (.25) | 74.35 (.26) |
| **m-vote** | 67.93 (.26) | --- |
| ensLoss | **70.07 (.90)** | **76.60 (.87)** |

As shown, ensembling indeed improves on single fixed-loss methods, but ensLoss still performs better. We observed the same conclusions across datasets and will add details in the revision. The challenge with ensembling methods is that selecting too many fixed losses requires substantial training cost, while combining too few losses yields little effect. A key advantage of ensLoss is that it requires only a single model training. > (the fonts on Fig 1 and 5 are too small) **Reply.** Thank you for the comment. We will revise them in the revision.
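For concreteness, the two ensemble baselines compared above can be sketched as follows. This is a minimal illustration with made-up scores, not the actual experimental pipeline: model averaging (m-ave) pools real-valued decision scores before thresholding, while majority voting (m-vote) pools hard labels in {-1, +1}.

```python
def m_ave(scores):
    """Model averaging: average real-valued scores f_k(x), then threshold."""
    return 1 if sum(scores) / len(scores) > 0 else -1

def m_vote(labels):
    """Majority voting: pool hard predictions in {-1, +1}."""
    return 1 if sum(labels) > 0 else -1

print(m_ave([0.9, -0.2, 0.4]))  # mean 0.37 > 0 -> prints 1
print(m_vote([1, -1, -1]))      # vote sum -1 -> prints -1
```

Both baselines require training one model per fixed loss, which is what makes ensLoss's single-training property attractive by comparison.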
Summary: This paper considers the problem of binary classification, proposing a method called EnsLoss that combines with stochastic gradient descent (SGD) to obtain the optimization objective for training a classifier. The key idea of EnsLoss is based on the convenient classification-calibration condition (Bartlett+, 2006): for a bounded-from-below convex loss, a negative derivative at 0 is sufficient for the loss to be calibrated. Based on this information, Ensloss tries to obtain a reasonable derivative that gives rise to a classification-calibrated loss. Theoretical analysis of classification calibration and an excess risk bound confirms the validity of the proposed method. Experiments show that with SGD, the proposed Ensloss significantly outperformed fixed loss functions (e.g., Logistic, Hinge, EXP), while it can also be combined with other regularization schemes (e.g., dropout). ## update after rebuttal I have read the rebuttal and appreciate the authors' response to my concerns. I think this paper has merits that outweigh its flaws, and I want to keep my score on the acceptance side. That the contribution is limited to binary classification is acceptable to me, as it is an important problem setting, and I feel the extension to the multi-class setting might require a further significant amount of work, as the classification calibration condition is not as simple as in binary classification. Claims And Evidence: 1. Methodologically, the proposed method is clear and convincing. 2. The experimental results showed the superiority of the proposed method. But I still feel there is no clear explanation of why this method is better than a fixed loss, since a fixed loss can also be classification-calibrated and have a reasonable excess risk bound. Methods And Evaluation Criteria: 1. Many datasets were used in the experimental result section. The evaluation criteria, accuracy and AUC (in the appendix), are reasonable. 2. 
This paper helpfully provides a section on the compatibility of the proposed method with other regularization methods, which nicely shows the usefulness of Ensloss. Theoretical Claims: 1. I briefly checked the proof and think the result is reasonable. Experimental Designs Or Analyses: 1. Experimental results are reasonable. 2. It would be nice to have more ablation studies to highlight the importance of each condition, e.g., whether Line 13 (bounded below) is important. In the empirical sense, I am not sure if this is needed, as the function class is already somewhat bounded in practice. Supplementary Material: I read the supplementary material and found some interesting experimental results for different $\lambda$ and the discussion about the different right tails. Also, I feel Appendix A regarding Algorithm 2 should be in the main body of the paper, since it is important for understanding Algorithm 1; the current form is not self-contained enough. On the other hand, some related-work discussion could be shortened, because this part (Algorithm 2) is essential for understanding Algorithm 1. Relation To Broader Scientific Literature: It is called Ensloss, but I feel it is more like a design of the objective that bypasses loss design and goes straight to derivative design for use with stochastic gradient descent. Perhaps additional discussion on derivative design for SGD in several problems would be nice. Essential References Not Discussed: I do not have any additional suggestions. Other Strengths And Weaknesses: Strengths 1. Theoretically justified. 2. Easy to implement under the SGD framework. 3. The paper is easy to follow. 4. Many datasets were used in the experimental results, which I found impressive. Weaknesses 1. Limited to binary classification. 2. Limited to the SGD optimization algorithm. 3. The algorithm seems to require more training time to stabilize the training accuracy, as noted in the conclusion section. 4. 
I still feel it is unclear why Ensloss can significantly outperform logistic loss and hinge loss. Other Comments Or Suggestions: Overall, I think the paper is easy to follow. Suggestions on writing: 1. Please include the discussion of Algorithm 2 if possible, as it is important for understanding Algorithm 1. Or try to rewrite Algorithm 1 in a way that doesn't rely too much on Algorithm 2. 2. I know dropout is very popular and common, but in the discussion of related work there is no description at all of how it works, i.e., randomly deactivating a subset of neurons during training. 3. What is Assumption 1 in Theorem 3.6? Do you mean Assumption 3.4? Assumption 1 is also referred to in the appendix. Current score before the rebuttal: Overall, I think the idea of making a classification-calibrated loss without directly defining a loss for binary classification is interesting. The proposed method is reasonable and achieves good experimental results. I listed some weaknesses in the weaknesses section above. Questions For Authors: 1. I still feel it is unclear why Ensloss can significantly outperform logistic loss and hinge loss. Could you explain intuitively why this is the case? 2. Do the exp, hinge, logistic, and squared losses satisfy the superlinearly raising tail condition? If not, is there any well-known fixed loss that satisfies this condition? 3. What is $z$ (not bold) in line 15? I failed to follow this part. Is it a constant for all data points in the mini-batch? 4. Regarding the bounded-below condition, I believe it might not be a big concern if we assume $f$ is bounded. For example, the unhinged loss, which is simply the linear loss $\phi(z)=1-z$, is robust and classification-calibrated if the function is bounded (Proposition 5 of "Learning with Symmetric Label Noise: The Importance of Being Unhinged": https://arxiv.org/abs/1505.07634). Do you find this condition highly important in the experiments? 5. 
Can we know the form of the risk minimizer of Ensloss? For example, the hinge loss risk minimizer is sign(p(y=1|x)-0.5), and that of the squared loss is 2p(y=1|x)-1. Code Of Conduct: Affirmed. Overall Recommendation: 3
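The calibration condition this review revolves around (a negative derivative at 0 for a bounded-from-below convex loss) can be checked numerically for the losses the reviewer names. A quick sketch, assuming the standard textbook forms of these losses (not code from the paper):

```python
import math

# Standard textbook forms of common convex surrogate losses (assumed here).
losses = {
    "logistic": lambda z: math.log(1 + math.exp(-z)),
    "exp":      lambda z: math.exp(-z),
    "hinge":    lambda z: max(0.0, 1 - z),
    "square":   lambda z: (1 - z) ** 2,
}

def deriv_at(phi, z, h=1e-6):
    """Central-difference estimate of phi'(z)."""
    return (phi(z + h) - phi(z - h)) / (2 * h)

for name, phi in losses.items():
    print(name, deriv_at(phi, 0.0) < 0)  # all print True: phi'(0) < 0
```

All four losses pass the check, consistent with their being classification-calibrated.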
Rebuttal 1: Rebuttal: > (No clear explanation of why EnsLoss outperforms fixed losses) > > (Intuitive explanation) **Reply.** We appreciate the opportunity to clarify why ensLoss outperforms fixed losses. 1. An intuitive explanation: - ensLoss is a method conceptually similar to Dropout, offering benefits through ensemble effects. - More importantly, for fixed losses, the only truly "necessary" condition is calibration, with potential overfitting arising from extra structures (e.g., hinge loss pushes classifiers to exactly +1/-1). ensLoss does *not* require the classifier to satisfy a specific risk minimizer; instead, it seeks a good classifier across various random losses while maintaining calibration. This can be considered a form of *regularization*: we avoid pushing the classifier to excessively satisfy any particular loss, thereby reducing overfitting. 2. While calibration and excess risk bounds are crucial for verifying consistency, they are not necessarily informative about comparative performance on specific datasets. Distribution-free upper bounds are often not tight for particular datasets, making it difficult to determine which loss will perform better on a given dataset. 3. We add theoretical support for ensLoss through Rademacher (Rad) complexity. The Rad complexity for ensLoss is: $$ \overline{\text{Rad}}(\mathcal{F}):=\sup\_{f \in \mathcal{F}}|\frac{1}{n}\sum\_{i=1}^n \tau_i \mathbb{E}_{\Phi} \Phi(Y_i f(\mathbf{X}_i) )|\leq \mathbb{E}\_{\Phi}(\text{Rad}\_{\Phi}(\mathcal{F})), $$ where $\tau_i$ are i.i.d. Rad random variables, and $\text{Rad}\_{\Phi}(\mathcal{F})$ is the Rad complexity of a fixed $\phi$-classification. Hence, the Rad complexity of ensLoss is no worse than the average over a set of fixed losses, showing at least one theoretical benefit. > (Ablation studies on "bounded below") **Reply.** We greatly appreciate this comment. This is precisely an important step in developing ensLoss: we also initially overlooked this condition, but discovered that it is indeed critical. 
To demonstrate its effect, we report typical train/test accuracies for CIFAR2 (dog-cat) using VGG16 with/without the bounded-below (BB) condition (showing selected epochs).

| Epoch | w/o BB Train Acc | w/o BB Test Acc | w/ BB Train Acc | w/ BB Test Acc |
|-|-|-|-|-|
| 20 | 0.50 | 0.49 | 0.96 | 0.74 |
| 60 | 0.50 | 0.51 | 1.00 | 0.83 |
| 120 | 0.50 | 0.49 | 1.00 | 0.83 |
| 160 | 0.51 | 0.49 | 1.00 | 0.83 |
| 200 | 0.50 | 0.49 | 1.00 | 0.83 |

*Finding*
- w/o: the model fails to learn (random-level train accuracy)
- w/: effective learning and prediction

*Explanation* From a loss-derivative perspective, Line 13 discounts gradients for correctly classified samples ($z=yf(x)>1$), focusing optimization efforts on misclassified/boundary points, a standard loss design principle. > (Limited to binary classification) **Reply.** Agreed; we will discuss the limitation and potential extensions, as ensLoss can be extended to more general ML tasks with calibration conditions. We also focus on BC because: - The iff condition for calibration in BC is well-studied and straightforward to implement, allowing us to directly validate our ensemble idea without additional complications. - BC itself is a fundamentally important problem with tremendous applications. We believe that improvements here are also highly significant. > (ensLoss requires longer time for stabilization) **Reply.** We agree with this limitation and provide precise time comparisons. We report the *minimum epoch* for stabilization (after which train accuracy remains within an error margin of 0.005) on CIFAR2 (dog-cat).

| | BCE | Hinge | ensLoss |
|-|-|-|-|
| ResNet101 | 80 | 60 | 160 |
| ResNet34 | 70 | 25 | 145 |
| VGG16 | 15 | 25 | 55 |
| VGG19 | 45 | 35 | 150 |

As indicated, ensLoss typically requires 2-3x more epochs to stabilize, which remains acceptable. > (Discussion on derivative-design for SGD) > > (Limited to SGD) > > (Suggestions on writing) > > (typos of $z$ in Algo 1) **Reply**. 
Agreed; we will: (1) move Algorithm 2 into the main text; (2) add a description of dropout; (3) discuss derivative-design methods; (4) discuss the limitations of relying on SGD; and (5) correct typos, such as replacing $z$ with $\gamma$. > (Superlinear for existing losses) **Reply.** All the mentioned losses (exp, hinge, logistic, and square) satisfy the superlinear tail condition (proofs omitted due to space constraints). > (The "bounded below" may be unnecessary if f is bounded) **Reply.** Yes, if $f$ is bounded, the condition can be relaxed. In fact, for a convex loss, boundedness of $f$ is a sufficient condition for boundedness of $\phi$: assume $|f(x)| \leq B$, so $|z| = |y f(x)| \leq B$; since $\phi$ is convex, it is continuous, and thus $\phi$ is bounded on $|z| \leq B$. > (risk minimizer of the Ensloss) **Reply.** Thank you for this insightful question. Unlike fixed losses, ensLoss has no exact form of its risk minimizer, as the minimizer depends on the distribution of the underlying losses. Yet, as indicated in Theorem 3.6, the method is calibrated, ensuring that the minimizer of ensLoss shares the same sign as $P(Y=1|x) - 0.5$, although we cannot determine its exact form. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing detailed responses to my concerns. 1. Intuitive explanation of Ensloss. I appreciate the intuitive explanation of Ensloss: the flexibility in the functional form of the Ensloss risk minimizer might make it easier to optimize accuracy **than** with a fixed loss function, where the risk minimizer's form is usually fixed. Regarding the Rademacher complexity: that of Ensloss is smaller than, or at most equal to, that of a fixed loss. I found it strange to say that a lower Rademacher complexity indicates a better result. If we only compare Rademacher complexity, then what we can say is that the function class f learned with Ensloss is less complex than with a fixed loss, hence less flexible. 
Perhaps we need to directly compare the estimation error bound or something like that to show the advantage. 2. Bounded-below condition: I was surprised that this condition is so critical. Thanks for the result! 3. Slower than fixed loss: I think it is a good idea to clearly write in the paper (which the submission already does) that the disadvantage is that Ensloss training is slower to finish. I think this is not a big concern, and one may try to optimize it in future work. I am still leaning toward the accept side; I think I did not misunderstand this paper. I will incorporate this information in the discussion phase with the other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful and careful review as well as your prompt reply! We apologize that, due to the word limit in the rebuttal, we could not clearly express and explain the Rademacher complexity results. We would like to take this opportunity to provide more details and clarify further. When comparing the Rademacher complexities for *ensLoss* and fixed losses, the function class $\mathcal{F}$ used in both cases is the same; the differences in Rademacher complexities arise solely from the *different loss functions*. With a fixed loss $\phi$, the Rademacher complexity is defined as \begin{equation*} \text{Rad}\_{\phi}( \mathcal{F} ) := \sup\_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum\_{i=1}^n \tau\_i \phi\big(Y\_i f(\mathbf{X}\_i) \big) \Big|, \end{equation*} where $(\tau_i)_{i=1}^n$ are i.i.d. Rademacher random variables independent of $(\mathbf{X}\_i, Y\_i)\_{i=1}^n$. In contrast, the corresponding Rademacher complexity for the proposed ensemble loss method is: \begin{equation} \overline{\text{Rad}}\_{\text{ens}}( \mathcal{F} ) := \sup\_{f \in \mathcal{F}} \Big| \frac{1}{n} \sum\_{i=1}^n \tau\_i \mathbb{E}\_{\Phi} \Phi\big(Y\_i f(\mathbf{X}\_i) \big) \Big| \leq \mathbb{E}\_{\Phi} \big( \text{Rad}\_{\Phi}(\mathcal{F}) \big). 
\end{equation} This inequality suggests that the complexity of the ensemble method is no worse than the average of the complexities corresponding to the fixed losses in the family. According to Theorem 6.4 in [1], a lower Rademacher complexity improves the upper bound on the estimation error of the empirical risk minimization method. We absolutely agree with your observation that comparing complete estimation error bounds would yield a more convincing result. In our revised manuscript, we will include estimation error bounds to further discuss the potential benefits of *ensLoss*, while also offering a transparent discussion of the limitations of our theory. We hope our reply and rebuttal will enhance our paper and provide the necessary explanation and clarification. We respectfully hope you might consider these points in your final assessment. Thank you again for your thoughtful review! **Reference** [1] Zhang, T. (2023). Mathematical Analysis of Machine Learning Algorithms. Cambridge University Press.
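The Jensen step behind this inequality can be checked numerically on a toy example. The sketch below assumes a small finite loss family and a tiny linear function class (an illustration only, not the paper's setting): averaging the losses *inside* the absolute value before taking the sup over f can never exceed the average of the per-loss sups.

```python
import math
import random

random.seed(0)

# Assumed finite loss family standing in for the random loss distribution.
losses = [lambda z: math.log(1 + math.exp(-z)),  # logistic
          lambda z: math.exp(-z),                # exponential
          lambda z: max(0.0, 1 - z)]             # hinge

n = 50
data = [(random.gauss(0, 1), random.choice([-1, 1])) for _ in range(n)]
tau = [random.choice([-1, 1]) for _ in range(n)]          # Rademacher signs
F = [lambda x, w=w: w * x for w in (-1.0, -0.5, 0.5, 1.0)]  # tiny class

def rad(phi):
    """Empirical Rademacher-type quantity for one fixed loss."""
    return max(abs(sum(t * phi(y * f(x)) for t, (x, y) in zip(tau, data)) / n)
               for f in F)

# Ensemble version: the loss average sits inside |.| before the sup over f.
ens = max(abs(sum(t * sum(phi(y * f(x)) for phi in losses) / len(losses)
              for t, (x, y) in zip(tau, data)) / n)
          for f in F)

avg_fixed = sum(rad(phi) for phi in losses) / len(losses)
print(ens <= avg_fixed + 1e-12)  # True, by Jensen's inequality
```

The inequality holds for any draw of the data and signs, since it is a pointwise consequence of sup-of-averages being at most average-of-sups.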
Summary: The paper introduces EnsLoss, a novel ensemble learning method that applies the idea of ensembling to loss functions during model training within the empirical risk minimization (ERM) framework. Specifically, instead of explicitly using one loss function, the authors propose randomly sampling loss functions on the fly during ERM. In practice, the authors propose transforming the convexity and calibration conditions into conditions on loss-derivatives, allowing for the direct generation of calibrated loss-derivatives. The authors verify the validity of the proposed method through theoretical analysis. They also conduct experiments across multiple datasets and deep learning architectures to demonstrate its superiority over fixed-loss methods. Claims And Evidence: I think the authors provide some good theoretical justification for the proposed method. However, I feel that the empirical verification is weak. Specifically, according to Table 4, the authors only conduct experiments with CIFAR2, which is puzzling to me. Why not use CIFAR10? Moreover, the accuracies of the CIFAR2 results seem rather low. For instance, according to Table 6, the accuracy on CIFAR2 with ResNet34 is approximately 76%. Prior literature has shown that similar architectures achieve 90%+ accuracy on the CIFAR10 dataset. What is causing the huge gap? As such, I am not convinced that the proposed method is preferred over BCE loss. Methods And Evaluation Criteria: As argued above, I don't think the evaluation criteria make sense for the problem. Theoretical Claims: I briefly went over the claims, but did not have time to go through all the proofs provided in the appendix. In general, the claims make sense theoretically and seem technically sound. Experimental Designs Or Analyses: Yes. As argued above, I am not confident about the experiments conducted by the authors. Supplementary Material: I briefly looked at the experimental results of the appendix and Appendix A. 
I also think that some of the material in Appendix A (Algorithm 2 specifically) should be in the main text in order to fully understand the proposed method. Relation To Broader Scientific Literature: The proposed method is a general loss-function scheme that can be applied to the training of all sorts of machine learning models. Essential References Not Discussed: I think the authors need additional discussion of the different loss functions proposed for training neural networks. While the idea of dynamically sampling loss functions is new, a lot of prior works have been proposed that change the gradients of the loss functions. Some examples include: 1. Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017. 2. Leng, Zhaoqi, et al. "Polyloss: A polynomial expansion perspective of classification loss functions." arXiv preprint arXiv:2204.12511 (2022). 3. Zhang, Zhilu, and Mert Sabuncu. "Generalized cross entropy loss for training deep neural networks with noisy labels." Advances in neural information processing systems 31 (2018). 4. Ma, Xingjun, et al. "Normalized loss functions for deep learning with noisy labels." International conference on machine learning. PMLR, 2020. Other Strengths And Weaknesses: I think the idea of randomly sampling loss functions, while arguably incremental, is interesting. However, I am not fully convinced, both theoretically and empirically, that the proposed loss is better. Other Comments Or Suggestions: N.A. Questions For Authors: Why do we want a loss ensemble instead of a fixed loss function from a theoretical standpoint? Are there any benefits that can be shown? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > (Empirical verification weak - only tested on CIFAR2, not CIFAR10.) **Reply.** To clarify, our current work focuses specifically on *binary classification*. Since CIFAR10 is a multi-class dataset with 10 categories, we derived binary classification problems from it by creating all possible pairwise combinations of the original classes, which resulted in 45 distinct CIFAR2 datasets ($\binom{10}{2} = 45$ pairs). > (The accuracies of the CIFAR2 results seem rather low compared to CIFAR10 in literature) **Reply**. The performance gap you've noted stems from several differences between CIFAR10 and CIFAR2: 1. **Dataset size**: Each of our CIFAR2 datasets contains only 10,000 training images (5,000 from each of the two selected classes), whereas the full CIFAR10 dataset contains 50,000 training images. This significant reduction in training data naturally impacts model performance. 2. **Classification difficulty variation**: As shown in Figure 4 of our manuscript, the accuracy on CIFAR2 varies substantially depending on which pair of classes is selected, ranging from 68% to 96%. Some binary classification tasks (e.g., distinguishing between cats and dogs) are inherently more challenging than others (e.g., distinguishing between automobiles and horses). Therefore, the 90%+ accuracy for CIFAR10 is for a 10-class classification task with full training data. Our experimental setup is fundamentally different, making direct accuracy comparisons not meaningful. In addition, our experimental design aims to provide a fair comparison between different loss functions and the proposed ensLoss. The implementation settings for each method are identical, with the only difference being the loss function used. Additionally, we have provided anonymous GitHub open-source code to enable replication of our results. > (Algorithm 2 from Appendix A should be in main text for better understanding) **Reply**. Thank you for the suggestion. 
In our revision, we will move Algorithm 2 and the relevant explanatory content from Appendix A into the main text, specifically after Section 3.2 and before Section 3.3, where we introduce our methodology. > (Need discussion of prior works on modified loss functions/gradients for neural networks) **Reply.** Thank you for highlighting these important references on loss-function modifications. We agree that our work could benefit from being positioned within the broader context of gradient-based loss-function innovations. The papers you've mentioned represent significant contributions to loss-function design. Focal Loss addresses class imbalance by down-weighting well-classified examples. PolyLoss provides a unified framework through polynomial expansions. Generalized Cross Entropy and Normalized Loss Functions tackle noisy-label scenarios through robust formulations. As you mentioned, our approach differs in that we dynamically sample from a collection of loss functions rather than proposing a fixed modified loss. In our revision, we will expand Section 2 to include these references and others, providing a more comprehensive discussion of loss-function design from a gradient perspective. > (Unconvinced about theoretical and empirical advantages of randomly sampling loss functions) **Reply.** For empirical evidence demonstrating the superiority of *ensLoss*, please refer to our reply to the first two points, which clarifies why CIFAR10 (and its performance) is not appropriate to use or reference for binary classification. For theoretical justification, we supplement the theoretical advantages of *ensLoss* through the lens of Rademacher complexity. The Rademacher complexity plays a crucial role in determining the convergence rate of the excess risk (with smaller values yielding a faster rate). 
Specifically, given a classification function space $\mathcal{F}$, the Rademacher complexity of $\phi$-classification is defined as follows: $$ \text{Rad}_{\phi}( \mathcal{F} ) := \sup\_{f \in \mathcal{F}} | \frac{1}{n} \sum\_{i=1}^n \tau_i \phi (Y_i f(\mathbf{X}_i) ) |, $$ where $\tau_i$ are i.i.d. Rademacher random variables. On this ground, the corresponding Rademacher complexity for the proposed ensemble loss method can be formulated as: $$ \overline{\text{Rad}}( \mathcal{F} ) := \sup\_{f \in \mathcal{F}} | \frac{1}{n} \sum\_{i=1}^n \tau_i \mathbb{E}_{\Phi} \Phi (Y_i f(\mathbf{X}_i) ) | \leq \mathbb{E}\_{\Phi} ( \text{Rad}\_{\Phi}(\mathcal{F}) ), $$ where the inequality follows from Jensen's inequality. This simple deduction yields a positive result: the Rademacher complexity of the ensemble loss is no worse than the average over the set of fixed losses. Notably, identifying an effective fixed loss is often challenging due to the varying data distributions across datasets, which highlights the potential of the ensemble loss as a promising solution. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing me with feedback. I have read through the rebuttal along with the other reviewers' comments. While my concerns regarding the empirical results were partially resolved, I am still not fully convinced of how broadly applicable the proposed loss is, since it only works for binary classification. I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for improving the rating of our paper. Regarding the limitations of our work in binary classification and its potential extension to more general machine learning problems, we would like to provide additional information to explain these points. **Potential extension** The primary motivation behind our paper consists of two components: "*ensemble*" and the "*calibration*" of the loss functions. 
Therefore, this concept can be applied to various ML problems by identifying the appropriate consistency/calibration conditions for different tasks. Fortunately, some consistency conditions have been extensively studied in the literature, including Section 4 in [1] and Theorem 1 in [2] for multi-class classification, Theorem 2 in [3] for bipartite ranking or AUC optimization, and Theorem 3.4 in [4] for asymmetric classification. Having demonstrated the feasibility of our approach in binary classification, we can incorporate these established consistency conditions into our ensemble loss framework, presenting promising directions for future research and empirical validation. We will elaborate on these extensions in our revised manuscript. **Why binary classification** We would also like to explain why we currently focus on binary classification: First, the if-and-only-if condition for calibration in binary classification is well-studied and straightforward to implement, allowing us to directly validate our ensemble approach without additional complications. We believe this is a natural proof of concept before extending to more complex ML tasks. Second, despite its apparent simplicity, binary classification remains a fundamental problem with significant methodological importance and practical relevance. It serves as a building block for many modern machine learning applications. We believe that methodological advancements in binary classification have substantial value to the field and broad implications for related problems. We respectfully hope you might consider these extension possibilities in your final assessment. While we acknowledge the current focus on binary classification, we hope our contributions in this fundamental setting can be valued. Thank you again for your feedback. **References** [1] Zhang, T. (2004). Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32(1), 56-85. 
[2] Zou, H., Zhu, J., & Hastie, T. (2008). New multicategory boosting algorithms based on multicategory fisher-consistent losses. The Annals of Applied Statistics, 2(4), 1290. [3] Gao, W., & Zhou, Z. H. (2015). On the consistency of AUC pairwise optimization. IJCAI (pp. 939-945). [4] Scott, C. (2012). Calibrated asymmetric surrogate losses. Electronic Journal of Statistics, 6, 985-992.
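The pairwise CIFAR2 construction described earlier in this rebuttal (45 binary tasks from the 10 CIFAR10 classes) can be sketched as follows, using the standard CIFAR10 label names:

```python
from itertools import combinations

# Standard CIFAR10 class names (assumed ordering from the public dataset).
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

# Every unordered pair of classes defines one CIFAR2 binary task.
tasks = list(combinations(classes, 2))
print(len(tasks))   # C(10, 2) = 45 binary datasets
print(tasks[0])     # ('airplane', 'automobile')
```

Each task keeps only the 5,000 training images per selected class, which is why each CIFAR2 dataset has 10,000 training images rather than CIFAR10's 50,000.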
Summary: This paper introduces EnsLoss, a stochastic ensemble learning method specifically designed to mitigate overfitting in classification tasks. The main idea is to ensemble different surrogate loss functions during SGD. The authors provide theoretical analysis and empirical results across various datasets. The results indicate that EnsLoss consistently outperforms fixed-loss methods, particularly for overparameterized models like ResNet. Claims And Evidence: Strengths: 1. The authors provide empirical results that show consistent improvements over fixed losses. 2. EnsLoss is compatible with overfitting-prevention methods (Section 4.4). Weaknesses: 1. The algorithm/practical experiments and the theoretical framework (some conditions and proofs) mismatch; see detailed questions in Theoretical Claims. 2. Limited scope of experiments (only binary classification, and small-scale image and tabular datasets); more ablation studies and baselines are also needed, see detailed questions in Methods And Evaluation Criteria and Experimental Designs Or Analyses. Methods And Evaluation Criteria: Strengths: 1. The conditions for loss derivatives are derived from established theory. 2. The ensemble risk framework (Theorem 3.6) connects the proposed method to statistical consistency. 3. It is practical that EnsLoss is compatible with existing SGD-based pipelines. Weaknesses: 1. No ablation studies on the inverse Box-Cox transformation (step 9 in Algorithm 1). 2. All experiments focus only on binary classification, which limits the scope of the proposed method. 3. The authors mention the limitation that EnsLoss requires more epochs in general, but there is no evaluation of computational costs. Theoretical Claims: Weaknesses: 1. The proof of Theorem 3.6 assumes a finite loss space via Lemma 3.5, but the algorithm generates loss derivatives stochastically, which implies an infinite space. 
The authors argue that fixing the number of epochs makes the loss space finite in practice, but this is not formally proven. 2. The inverse Box-Cox transformation in Algorithm 2 can generate gradients from exponential distributions and thus produce unbounded gradients, which violates condition 3 in Assumption 3.4. 3. The theoretical framework assumes pre-defined losses (possibly random but static) as in Eq. 4, while the algorithm dynamically generates losses during SGD. The stochasticity of loss sampling is not addressed in the proofs. Experimental Designs Or Analyses: Weaknesses: 1. The authors choose $p=1$ for simplicity in the actual implementation (line 263), but in Lemma 3.2 they propose taking $g(z)=1/z^p$ with $p>1$. This may leave the loss unbounded below. 2. The authors state "Another benefit of ENSLOSS is its relative insensitivity from time-consuming hyperparameter tuning" in Lines 405-406, but $\lambda$ is still a hyperparameter affecting the results (Table 5). 3. Since the authors are inspired by ensemble ideas, more baselines should be compared, such as ensembling fixed losses. 4. Longer training with traditional loss functions might mitigate overfitting similarly, so it would be better to provide a more detailed analysis of whether EnsLoss truly provides advantages beyond simply running more epochs. Supplementary Material: Yes, Part A, Proof of Theorem 3.6, Lemma 3.2 and 3.3 Relation To Broader Scientific Literature: EnsLoss extends the idea of dropout to the loss space while grounding it in calibration theory. However, the authors only conduct experiments on binary classification, which leaves open questions about its broader applicability. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: If retaining $p=1$, it would be better to provide additional explicit conditions or constraints on the random loss derivatives, or to clearly state the theoretical limitation of Lemma 3.2. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
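To make the mechanism under review concrete, the following is a minimal, hypothetical sketch of loss ensembling during SGD on a toy binary classification problem. The surrogate pool, data, and learning rate are all illustrative and are not the paper's EnsLoss implementation (which samples loss derivatives via an inverse Box-Cox transformation); the shared pattern is that a different calibrated surrogate is drawn at each optimization step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three calibrated surrogate-loss derivatives phi'(z), where z = y * f(x).
def d_logistic(z):
    return -1.0 / (1.0 + np.exp(z))            # derivative of log(1 + e^{-z})

def d_hinge(z):
    return np.where(z < 1.0, -1.0, 0.0)        # derivative of (1 - z)_+

def d_sq_hinge(z):
    return np.where(z < 1.0, -2.0 * (1.0 - z), 0.0)  # derivative of (1 - z)_+^2

derivs = [d_logistic, d_hinge, d_sq_hinge]

# Toy linearly separable data with labels in {-1, +1}.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

w = np.zeros(2)
lr = 0.1
for _ in range(100):
    dphi = derivs[rng.integers(len(derivs))]   # sample a fresh surrogate each step
    z = y * (X @ w)
    grad = (dphi(z) * y) @ X / len(X)          # chain rule: d/dw mean phi(y * w.x)
    w -= lr * grad

acc = float(np.mean(np.sign(X @ w) == y))
```

The sampled derivative only changes which descent direction is used per step; the iterate itself is shared, which is why (unlike classic model ensembling) only a single model is trained.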
Rebuttal 1: Rebuttal: > (No ablation studies on invBC transformation) **Reply.** Thank you for the comment. Table 5 partially shows our invBC ablation studies: (1) Fixed case ($\lambda=0$) used in main experiments, where invBC is somewhat "deactivated" with gradients sampled from a fixed log-normal distribution; (2) Dynamic case with varying $\lambda$ sampled at different periods, where invBC is activated. This comparison reveals invBC's impact. We agree these results are insufficient and will include expanded analysis in our revision. > (Limited to binary classification) **Reply.** We agree and will discuss that ensLoss can be extended to more ML tasks with calibration conditions. We focused on binary classification (BC) because: 1. The if-and-only-if condition for calibration in BC is well-studied and straightforward to implement, allowing us to directly validate our ensemble idea without additional complications. 2. BC itself is a fundamentally important problem with many applications. We believe that improvements are also highly significant. > (No evaluation of computational costs of ensLoss with additional epochs) **Reply.** Agree. We have now provided time comparisons: we report the *min epoch* required for training to stabilize (after which train accuracy remains within an error margin of 0.005) on CIFAR2 (cat-dog) as a demo. ||BCE|Hinge|ensLoss| |-|-|-|-| | MobileNet|90|40|110| | MobileNetV2|80|35|90| | ResNet101|80|60|160| | ResNet50|45|35|150| | VGG16|15|25|55| | VGG19|45|35|150| As indicated, *ensLoss* training typically requires 2-3x more epochs to stabilize, which remains acceptable. > (Theorem 3.6 assumes finite loss space via Lemma 3.5) **Reply.** To clarify, Theorem 3.6 relies solely on Assumption 3.4 (A3.4) and NOT on a finite loss space. Lemma 3.5 provides only a sufficient condition for A3.4. E.g., a loss space $\phi(z)=(1-\beta z)_+$ with $\beta \sim U(1,2)$ contains infinitely many losses, yet satisfies A3.4. 
We will clarify it in our revision to prevent confusion. > (invBC can produce unbounded gradients) **Reply.** Thanks for this key insight. In our implementation, we applied clipping at -1 for the loss-derivatives (Line 46 in `loss.py`). This was done initially because BCE and hinge gradients are both bounded by 1, ensuring fair comparison by eliminating any potential effect from differences in loss-derivative magnitudes. We will add the clipping step between Lines 9-10 in Algo 1. > (Theory assumes static losses while algo uses dynamic losses) > > ($p=1$ is used but theory requires $p>1$) **Reply.** This concern may be related to the finite loss space assumption mentioned earlier. Our framework (4) requires only that losses satisfy Assumption 3.4, not an enumeration of all possible losses. For our implementation, Algo 1's generated loss-derivatives naturally satisfy conditions 1 and 3. We agree there is a small gap in condition 2 between the theory $(p > 1)$ and the algorithm $(p=1)$, but it has a negligible effect on numerical results. E.g., using $p=1+10^{-10}$ would satisfy the theoretical conditions with negligible numerical difference, as $z^{10^{-10}}$ falls beyond `float32` precision even for huge $z$ values. > (About insensitivity of tuning) **Reply.** We appreciate the careful reading. To clarify, - We used fixed $\lambda=0$ in all main experiments, achieving strong performance without tuning. - Table 5 merely explores invBC effects, not suggesting necessary tuning. Unlike methods requiring careful adjustment of parameters, *ensLoss* needs minimal tuning effort. We will clarify that Table 5 is an ablation study of invBC, not a suggestion that tuning is necessary. > (Comparison with ensembling fixed losses) **Reply.** Agree. 
To illustrate, we report test performance for model averaging (m-ave) and majority voting (m-vote) on CIFAR2 (dog-cat) using ResNet101 over three fixed losses: ||Acc|AUC| |-|-|-| |BCELoss|67.33(.23)|73.35(.24)| |EXP|53.62(.48)|54.86(.35)| |Hinge|67.12(.32)|68.60(.66)| |**m-ave**|68.14(.25)|74.35(.26)| |**m-vote**|67.93(.26)|---| |ensLoss|**70.07(.90)**|**76.60(.87)**| As shown, ensembling indeed improves over single fixed-loss methods, but ensLoss still performs better. We observed these conclusions across datasets and will add details in the revision. The challenge with ensembling methods is that selecting too many fixed losses incurs substantial training costs, while combining too few losses produces weaker effects. A key advantage of ensLoss is that it requires only a single model training. > (Need evidence that EnsLoss offers advantages beyond just running more epochs) **Reply.** We clarify that all methods used the same number of epochs. We also provided epoch-level comparisons, as shown in Fig 1, where performance stabilizes after 100-150 epochs, with ensLoss maintaining its advantage. This pattern appeared across all datasets. We'll include more results in our revision. The persistent gap confirms that ensLoss's benefits aren't simply from longer training.
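For concreteness, the two fixed-loss ensembling baselines in the table above can be sketched as follows. The scores below are made-up model outputs (not the reported experiments); they only illustrate how model averaging and majority voting combine the per-model predictions, and that the two rules can disagree.

```python
import numpy as np

# Hypothetical per-model real-valued scores f_m(x) on two test points,
# one row per model trained with a different fixed loss.
scores = np.array([
    [ 0.9, -0.2],   # e.g. a BCE-trained model
    [-0.6, -0.8],   # e.g. an exp-loss-trained model
    [-0.1,  0.3],   # e.g. a hinge-trained model
])

m_ave  = np.sign(scores.mean(axis=0))          # model averaging: average, then threshold
m_vote = np.sign(np.sign(scores).sum(axis=0))  # majority voting: threshold, then vote
```

Note that model averaging keeps a real-valued score (so an AUC can be computed), while majority voting yields only class labels, consistent with the missing AUC entry for m-vote in the table. Either way, each of the three models must be trained separately, which is the computational cost the rebuttal contrasts with ensLoss's single training run.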
CRANE: Reasoning with constrained LLM generation
Accept (poster)
Summary: The authors prove that constrained LLM generation diminishes the capabilities of LLMs. The (not precise) essence of this result is that 1. Under constrained decoding, logspace-uniform threshold circuits (in $TC^0$) can obtain the same outputs (solve the same problems) as LLMs. 2. Under unconstrained decoding, LLMs can simulate Turing machines (in $NL$). Because these two computational models belong to different complexity classes, it follows that there are problems (like st-connectivity) which LLMs can solve in an unconstrained setting and cannot solve in a constrained setting (unless $TC^0 = NL$). The second part establishes that augmenting the grammar to allow unconstrained decoding for reasoning (then followed by constrained decoding) allows one to obtain constrained output and simulate the same Turing machine as an unconstrained LLM. To bring this result to the realm of practical applicability, the paper proposes CRANE, a method to interleave constrained and unconstrained LLM decoding using an augmented grammar. Parts that require constrained decoding are generated between specific symbols (like << and >>), and there can be multiple such parts. ## Update after rebuttal I would like to thank the authors for their insightful comments. I will maintain my score. Claims And Evidence: The theoretical results are proven. The empirical results are strong and demonstrate the effectiveness of the proposed CRANE algorithm. Methods And Evaluation Criteria: The benchmarks are appropriate and state-of-the-art. Theoretical Claims: I couldn't verify the proofs of the theoretical claims. Experimental Designs Or Analyses: I'm not sure what the ablation study on few-shot examples ablates; I don't see anything that's ablated. Supplementary Material: I have read the supplementary material. Relation To Broader Scientific Literature: The paper is related to the literature of constrained decoding. 
The theoretical part is also related to complexity theory and the relationship between LLMs and computational models like boolean circuits or Turing machines. Essential References Not Discussed: I think the RICHES framework (https://arxiv.org/abs/2407.00361v1) is somewhat related in that it interleaves unconstrained decoding with exact retrieval keys, and the present work interleaves unconstrained and constrained decoding. Other Strengths And Weaknesses: I really liked the discussion about the limitations. Other Comments Or Suggestions: - this part could be made more exact, as it sounds like one can choose to use just unconstrained or just constrained strategies: "CRANE is compatible with various decoding strategies, both constrained and unconstrained" - Typo in Proposition 3.1: thershold - This could be rephrased: "This approach allows a simple and cost-efficient approach" - Typo on line 373: comparisong Questions For Authors: In Proposition 3.3, there is just one reasoning part before the single constrained decoding part. In CRANE, these two are freely interleaved with multiple constrained decoding parts. Do the results in Proposition 3.3 generalize naturally to this case? Also, is it always true that the final answer is at the end? Why? The paper says that CRANE works with an arbitrary LLM and constrained decoding algorithm. How do these generate the start and end delimiters? I have seen that the delimiters are in the prompt, but doesn't the grammar of the constrained generation also need to be modified to include the delimiters, like in C.5.1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer 6Prs, Thanks for your constructive feedback. > Q1. In CRANE, these two are freely interleaved with multiple constrained decoding parts. Do the results in Proposition 3.3 generalize naturally to this case? Is it always true that the final answer is at the end? **R1:** Proposition 3.3 shows that an ideal LLM (i.e., with a perfect set of parameters) can use constrained decoding with the augmented grammar $G_a$ (without any unconstrained steps) to simulate the steps of a given Turing machine and produce the final output only at the end. This result also naturally extends to CRANE, which interleaves unconstrained and constrained generation using $G' \to S_1GS_2$, where $G$ is the output grammar and $S_1, S_2$ are delimiters. The only modification needed in the proof is to configure the ideal LLM from Proposition 3.3 to wrap the final output in delimiters $S_1$ and $S_2$. In this case, the ideal LLM with CRANE can simulate the Turing machine’s steps and write the answer at the end. We will include a formal proof in the revised version of the paper. Practical LLMs may not have the specific set of parameters for the theoretical construction as in the ideal LLM and may not strictly follow this ideal format, where the output is generated only at the end. Instead, they may generate multiple strings that are parsable within the grammar (e.g., syntactically valid mathematical expressions). In such cases, following the common practice for CoT [1,2,3], for both CRANE and unconstrained reasoning, we greedily select the last parsable string to compute the functional accuracy. It is possible that the final answer is not generated at the end, and better selection algorithms could further improve the functional accuracy of CRANE. However, we leave this for future work. 
Nevertheless, our experiments demonstrate that CRANE, even with this simple selection strategy, outperforms direct final answer generation (whether in constrained or unconstrained modes) and unconstrained reasoning. [1] "Large Language Models are Zero-Shot Reasoner", Advances in Neural Information Processing Systems 35 (NeurIPS 2022)\ [2] "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models", The Thirteenth International Conference on Learning Representations (ICLR 2025)\ [3] https://huggingface.co/datasets/apple/GSM-Symbolic#answer-extraction-heuristic > Q2. Doesn't the grammar of the constrained generation also need to be modified to include the delimiters, like in C.5.1? **R2:** The constrained decoding algorithm must be initialized with the augmented grammar that includes the delimiter symbols. We mentioned this in the pseudocode (Algorithm 1, lines 3 and 4). The grammars in C.5.1 also include the delimiter symbols. For example, line 916 of the GSM-Symbolic grammar includes `<<` and `>>`. We will update the paper to clarify this detail. > Q3. Typos and writing suggestions. **R3:** Thanks for pointing this out. We will fix them in revised version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your insightful answers! I will maintain my score.
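To make the delimiter-gated mechanism discussed in this exchange concrete, here is a toy simulation. Real constrained decoding masks the LLM's logits at each step; this post-hoc token filter (with a made-up `allowed` set standing in for a grammar-derived mask, and `delimiter_gated_filter` a hypothetical name, not CRANE's implementation) only illustrates the gating: constraints apply strictly between `<<` and `>>`, and free-form reasoning text passes through untouched.

```python
def delimiter_gated_filter(tokens, allowed, s1="<<", s2=">>"):
    """Keep all tokens outside the delimiters; inside them, keep only
    tokens permitted by the (grammar-derived) `allowed` set."""
    out, constrained = [], False
    for tok in tokens:
        if tok == s1:
            constrained = True
            out.append(tok)
        elif tok == s2:
            constrained = False
            out.append(tok)
        elif not constrained or tok in allowed:
            out.append(tok)   # free-form text, or a grammar-legal token
        # else: token violates the grammar inside the delimiters and is dropped
    return out

stream = ["reason", "freely,", "then:", "<<", "3", "+", "oops", "4", ">>", "."]
filtered = delimiter_gated_filter(stream, allowed={"3", "4", "+", "=", "7"})
```

Here the illegal token `"oops"` is removed only because it appears inside the delimiters; the same token before `<<` would have survived. This mirrors the rebuttal's point that the constrained decoder must be initialized with the augmented grammar containing the delimiter symbols, since the switch between modes is driven by those symbols.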
Summary: The paper introduces CRANE (Constrained Reasoning Augmented Generation), a decoding algorithm for grammar-constrained large language model (LLM) generation that aims to balance syntactic correctness with reasoning capabilities. The work first provides a theoretical explanation for why constrained generation diminishes the expressive power of LLMs. It then proposes an augmented-grammar solution that allows intermediate reasoning steps to occur unrestricted before constrained generation. The approach is demonstrated to improve both syntactic correctness and functional correctness. ## update after rebuttal The rebuttal addresses my questions. I remain positive on the paper. Claims And Evidence: + Claims are generally well-supported by theoretical propositions and empirical evaluation. Specifically, the work shows improvements on two benchmark datasets compared to baseline approaches. Methods And Evaluation Criteria: + The evaluation methods are appropriate. The work evaluates its approach on two symbolic reasoning datasets (GSM-Symbolic and FOLIO) and uses evaluation tools such as the Z3 solver and Prover9 for validating correctness. Theoretical Claims: + The author presents clearly the limitation of prior work on providing only empirical studies. This work instead provides theoretical studies on why constrained generation limits the reasoning capability of the LLM. The theoretical analysis seems sound. Experimental Designs Or Analyses: The experimental designs seem sound. The proposed approach is compared to relevant approaches on two benchmarks using multiple LLM models. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: The work clearly relates to prior works on constrained decoding. It provides two main key contributions based on its position: 1. The theoretical analysis explaining the limited reasoning capability from constrained generation 2. 
A simple yet effective approach to trade off between constrained generation and reasoning. Essential References Not Discussed: + the work comprehensively cites relevant prior works. - Many reasoning approaches are proposed to allow LLMs to tackle more complex tasks. The work could discuss how these reasoning-related works affect the proposed approach. Other Strengths And Weaknesses: + clear theoretical contributions + simple yet effective approaches to improve grammar-constrained LLM generation - focusing on grammars with finite languages seems to significantly limit the applicable scope of the work, as the heavyweight usage of LLMs is in code generation for common (e.g., Rust) and domain-specific languages, which are likely all infinite. This could be the reason why the work focuses on the selected two benchmarks. Other Comments Or Suggestions: Minor typo: line 373: comparisong against --> comparison against line 498: Opneai tools --> OpenAI tools Questions For Authors: 1. Does the choice of the delimiter symbols (S1 and S2) affect the performance of CRANE? Currently, << and >> are used as the delimiters. Have the authors tried other symbol choices? 2. The models evaluated are all relatively small. Have the authors tried larger models which could have better reasoning capability to begin with? Does the proposed approach see diminishing gains? 3. Can the authors explain how advances in reasoning models would affect grammar-constrained generation? In particular, the effectiveness of CRANE? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer ssfH, Thanks for the constructive feedback. We have included additional experiments with different delimiters and larger and newer reasoning models. All experiments below use the same setup as Section 5. In all cases, CRANE consistently outperformed the baselines. We will add these results in the revised version. > Q1. Experiments with different delimiters. **R1:** We use `<<` and `>>` as the delimiters for GSM-Symbolic following the original paper (see Figure 1) [1]. We will make this detail clearer in the revised version of the paper. Furthermore, to highlight CRANE's effectiveness across different delimiters, we have performed an ablation study on the GSM-symbolic dataset. We use ``## ##`` and ``!! !!`` in place of `<< >>` and report comparison against unconstrained CoT, which always performed the best among all three baselines. | Delimiter | Model | Method | Acc. (%) | Parse (%) | Tokens | | -------- | -------- | -------- |-------- |-------- | -------- | | || Unconstrained CoT | 20 | 61 | 156.13 | | |Qwen2.5-1.5B-Instruct | **CRANE** | **23** | 96 | 165.24 | | || Unconstrained CoT | 27 | 70 | 164.29 | | ``## ##``| Qwen2.5-Math-7B-Instruct | **CRANE** | **35** | 93 | 162.73 | | || Unconstrained CoT | 23 | 80 | 150.34 | | |Qwen2.5-1.5B-Instruct | **CRANE** | **26** | 92 | 162.21 | | || Unconstrained CoT | 24 | 81 | 198.53 | | ``!! !!``| Qwen2.5-Math-7B-Instruct | **CRANE** | **32** | 89 | 255.79 | | | | | | | | > Q2. Experiments with larger and advanced reasoning models. **R2:** We add experiments for newly released reasoning and larger models on GSM-symbolic. CRANE consistently outperforms the baselines on all models. | Model | Method | Acc. 
(%) | Parse (%) | Tokens | | -------- | -------- | -------- |-------- |-------- | | | Unconstrained w/o CoT | 18 | 89 | 21.64 | | | Constrained | 20 | 99 | 17.21 | | | Unconstrained CoT | 24 | 89 | 212.24 | | DeepSeek-R1-Distill-Qwen-7B | **CRANE** | **29** | 92 | 235.78 | | | Unconstrained w/o CoT | 12 | 77 | 29.2 | | | Constrained | 13 | 96 | 16.89 | | | Unconstrained CoT | 21 | 87 | 250.83 | | DeepSeek-R1-Distill-LLaMA-8B | **CRANE** | **31** | 92 | 268.82 | | | Unconstrained w/o CoT | 42 | 93 | 18.52 | | | Constrained | 42 | 96 | 25.62 | | | Unconstrained CoT | 42 | 95 | 157.71 | | Qwen2.5-Coder-14B-Instruct | **CRANE** | **45** | 95 | 158.54 | | | Unconstrained w/o CoT | 29 | 82 | 20.9 | | | Constrained | 30 | 91 | 30.48 | | | Unconstrained CoT | 32 | 87 | 233.42 | | DeepSeek-R1-Distill-Qwen-14B | **CRANE** | **38** | 93 | 244.98 | | | Unconstrained w/o CoT | 37 | 80 | 54.93 | | | Constrained | 38 | 91 | 34.2 | | | Unconstrained CoT | 43 | 87 | 222.62 | | QwQ-32B | **CRANE** | **46** | 88 | 237.98 | | | | | | | > Q3. Advances in Reasoning Models and Grammar-Constrained Generation **R3:** CRANE's effectiveness is maintained with the latest reasoning models. Larger, more advanced reasoning models are still susceptible to syntactic and semantic errors. Additionally, when recent reasoning models generate lengthy reasoning traces, enforcing output constraints from the start—as done in baseline constrained decoding algorithms—significantly degrades LLM performance. CRANE addresses both of these challenges by dynamically enforcing constraints. Notably, with DeepSeek-R1-Distill-LLaMA-8B, CRANE achieves the highest performance gains, improving by 10% points. Therefore, for future reasoning models that generate reasoning traces and require final results to provably satisfy structural constraints, CRANE remains a valuable tool. > Q4. Finite Grammars. **R4:** Both the gsm-symbolic and FOLIO grammars permit infinite strings. 
The experiments demonstrate that CRANE significantly enhances the performance of practical LLMs, even when the output grammar allows infinite strings. Finite output grammars $G$ are just a special case used to theoretically establish the negative result, highlighting the limitations of constrained decoding when using highly restrictive grammars for **any** constant-layer LLM. Overall, CRANE remains a valuable framework for improving the performance of practical LLMs, even with infinite output grammars. On the other hand, Proposition 3.3 already provides a positive result for the ideal setting. It shows that an **ideal LLM** (with perfect parameter values) can always simulate any Turing machine using constrained decoding alone. This is possible when the output grammar $G$ is augmented with additional rules, as done in $G_a$. However, the currently available LLMs may not possess the perfect set of parameters. > Q5. Typos. **R5:** Thanks for pointing this out. We will fix them in the revised version. [1] "GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models", ICLR, 2025.
Summary: The paper examines the expressivity of large language models (LLMs) under grammar constraints. It first demonstrates that there are problems LLMs can solve without constraints but fail to solve under grammar constraints. The paper introduces the CRANE algorithm, which first generates an unconstrained reasoning chain and then imposes the constraint grammar within specific generated symbols. Claims And Evidence: The claims are sound and well-supported. - The paper claims that an LLM with a constant number of layers cannot solve all problems. The authors first give the example of the st-connectivity problem, which is NL-complete but constrained to a single output digit, 0 or 1. Then, they prove that the LLM cannot solve it under the constraint unless the logspace-uniform class $TC^0 = NL$. - The experiments validate the effectiveness of the proposed algorithm. The results show the proposed method significantly outperforms all baselines with different LLMs while achieving a grammar pass rate comparable to the constrained-grammar baseline. Methods And Evaluation Criteria: The paper uses accuracy and the grammatical-correctness rate to evaluate the methods. It is sound to inspect whether the methods follow the required grammar. Theoretical Claims: The proofs of Propositions 3.1 and 3.3 are correct. Experimental Designs Or Analyses: The experimental designs are sound. The paper evaluates the method on math and logic datasets and compares it with baselines with constraints and CoT. Supplementary Material: I have read the supplemental materials for proofs and output examples. Relation To Broader Scientific Literature: The theoretical claims may inspire further work on LLM expressivity under constraints and the design of inference methods for certain grammars. Essential References Not Discussed: The paper cites sufficient related works. Other Strengths And Weaknesses: Weakness: - The paper provides a theoretical explanation of why CoT is essential for LLMs across various tasks. 
However, CoT has recently become a widely adopted technique. The proposed method integrates CoT with output constraints and thus has limited novelty. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer jTKi, Thanks for your constructive feedback. > Q1. The paper provides a theoretical explanation of why COT is essential for LLMs across various tasks. However, COT has recently become a widely adopted technique. The proposed method integrates CoT with output constraints and thus has limited novelty. **R1:** Although Chain-of-Thought (CoT) is a popular technique, to the best of our knowledge, we are the first to formally show that constrained decoding with restrictive grammars can limit the ability of large language models (LLMs) to perform CoT. This, for the first time, provides a theoretical justification for earlier empirical evidence [1] showing that the functional accuracy of LLMs for certain tasks decreases under constrained decoding. More importantly, we show that the lack of expressive power arises only when the output grammar is overly restrictive, and this limitation can be easily mitigated by augmenting the output grammar with additional production rules for any given task. This insight leads to our proposed adaptive decoding algorithm, CRANE, which improves performance over both baseline constrained and unconstrained decoding methods. Our primary contribution is not just the combination of CoT with constrained decoding but: 1. Identifying and formally establishing the potential limitations of constrained decoding with restrictive grammars for any LLM, including more advanced models developed in the future. 2. Providing both -- a theoretical augmented grammar construction for any task (Turing Machine) and a simple, practical algorithm—CRANE—that simultaneously preserves reasoning capabilities and reduces structural errors (syntax and semantic errors) in generated outputs, thereby enhancing overall LLM performance. [1] "Let Me Speak Freely? 
A Study on the Impact of Format Restrictions on Performance of Large Language Models", Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track.
Collapse-Proof Non-Contrastive Self-Supervised Learning
Accept (poster)
Summary: This work studies non-contrastive self-supervised representation learning. The authors identify known collapse modes in non-contrastive SSL and propose a collapse-proof approach (CPLearn). They show that CPLearn jointly decorrelates and clusters embeddings, avoiding common collapse modes without the need for heuristics as in previous literature. Claims And Evidence: To support the claims, the paper provides mathematical proofs for specific design choices such as the loss function and the projector, and provides experimental results on four classic SSL image datasets. The proposed method (CPLearn) achieves competitive performance when the dictionary size gets large. Methods And Evaluation Criteria: The method essentially projects and soft-assigns learned representations to a codebook and computes the loss w.r.t. the codes. The design intuitively makes sense and is supported by the provided theoretical justifications. The evaluation follows standard SSL protocols. Experiments are conducted on four classic SSL image datasets and compared with representative baselines in the field. The practical choice of a critical hyperparameter (c) is also discussed and experimented with. Additionally, the performance range over multiple runs is provided in the tables, making the evaluation solid. Theoretical Claims: I did not rigorously check the correctness of the proofs. Experimental Designs Or Analyses: Experimental designs and analyses are sound. Besides the experiments, visualizations of important aspects of the learning process are also provided in Figures 3 and 4, which are informative with detailed explanations. Supplementary Material: I did not carefully review all the contents of the supplementary. I mainly reviewed figures and tables. Relation To Broader Scientific Literature: The paper provides sufficient literature discussion. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. 
Theories rather than heuristics are presented to support the design choices and are backed by experimental results and visualization. 2. The proposed method seems straightforward to implement. The provided pseudocode is clear. Weaknesses: See questions. Other Comments Or Suggestions: None. Questions For Authors: VicReg is also a mainstream non-contrastive SSL method; why was it not compared with in the experiments? Results on ImageNet are not provided. Are there any difficulties in applying the method to larger-scale SSL training, for example, on ImageNet? How fast is the training convergence compared with the baselines? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the appreciation of our work and the time dedicated to reviewing our paper. Please find below the answers to your questions: **Comparison with VicReg** We agree that VicReg is another non-contrastive self-supervised method. Given the limited amount of time available for the rebuttal, we have decided to provide additional results including experiments with a ResNet-18 on CIFAR10 and add additional baselines, such as SwAV and GEDI no gen, on ImageNet-100 to answer the requests from other reviewers (please refer to the answer to Reviewer KMvq). While we didn't perform an explicit comparison with VicReg on ResNet-18 and ImageNet-100, it is still possible to observe that VicReg achieves similar performance to Barlow Twins when using a ResNet-18 on CIFAR-10 by looking at the results reported in the [solo-learn](https://github.com/vturrisi/solo-learn) repository, which is used in our experiments (e.g. Barlow Twins 92.1% vs. VicReg 92.1% acc@1 on CIFAR-10). This is reasonable, as VicReg belongs to the family of feature-decorrelation approaches like Barlow Twins. **Training Full ImageNet** The main bottleneck arises from the fact that we don't have access to large computational resources to scale up to full ImageNet. **Training convergence** We provide a comparison of the linear probing performance over training in the experiments for ResNet-18 on CIFAR10. We visualize the top1 accuracy over training epochs on both linear and logarithmic scales. We observe that CPLearn achieves a better convergence rate compared to the other approaches, thanks to the quasi-orthogonality condition of the code matrix W. The plots are available at the following anonymous [link](https://aquamarine-lula-78.tiiny.site) and will be added to the Supplementary Material together with the additional experiments done in this rebuttal (that is, experiments with a ResNet-18 on CIFAR10 and additional baselines, such as SwAV and GEDI no gen, on ImageNet-100). 
Please let us know if you have any additional questions. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I have read the other rebuttals and the additional results, and I encourage adding these results to the manuscript during revision to make the paper more complete. I appreciate the plots on convergence speed. Overall, the paper puts emphasis on its theoretical motivation, so I am okay with the smaller-scale experiments and some of the results being comparable to baselines. I am willing to keep my original rating, and I am looking forward to seeing more evaluations on how pretrained CPLearn can perform on downstream vision tasks in the future. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's engagement and the additional feedback. Indeed, we will include all analyses and discussions in the final paper, as they have all contributed to enhancing the completeness of the work. Given the allowed extra page, we will make the following modifications to the main text: 1. Add the discussion about the choice of the Rademacher distribution for the hyperdimensional computing vectors 2. Include the additional baselines (SwAV and GEDI no gen) for the experiments on ImageNet-100 3. Add the extended analysis of dimensional and intra-cluster collapses for all methods -> see [Figure](https://anonymous.4open.science/r/ICML12199-1261/Collapses.pdf) In the Supplementary, we will include all results from the rebuttal: 1. The analysis of the training convergence of all methods -> [Figure](https://anonymous.4open.science/r/ICML12199-1261/Convergence.pdf) 2. The experiments on ResNet-18 3. The time/storage analysis of all methods 4. The analysis of the $\beta$ hyperparameter for CPLearn Thank you once again for the interaction.
Summary: This paper theoretically demonstrates the conditions for avoiding four kinds of collapse in non-contrastive self-supervised learning based on the CPLearn projector design. Specifically, the authors prove that minimizing invariance to data augmentations while matching priors suffices to avoid representation and cluster collapses, whereas orthogonal frozen weights based on hyperdimensional computing and large prediction outputs in the projector are key to avoiding dimensional and intracluster collapses. The theoretical claims are validated through experiments on benchmark datasets.

Claims And Evidence: Yes, the claims are well supported.

Methods And Evaluation Criteria:
- The NMI and accuracy of the proposed method are tested on several benchmark datasets, whereas the selection of compared methods differs between Tables 1 and 2. I think the authors should explain why.
- The experiments are conducted on small benchmark datasets. Additional results on e.g. full ImageNet would be more convincing.
- The accuracy improvements in the tables seem marginal.

Theoretical Claims:
- I skimmed through the proofs but did not check the details. They seem to make sense.
- Line 155 (left). The matrix W is drawn from the Rademacher distribution. Is there any explanation for this assumption? I suppose in practice the W matrix is learned through optimization instead of drawn from a certain distribution.

Experimental Designs Or Analyses: Please refer to *Methods And Evaluation Criteria*.

Supplementary Material: I skimmed through the proofs and read the additional experimental results.

Relation To Broader Scientific Literature: NA.

Essential References Not Discussed: NA.

Other Strengths And Weaknesses:
**Strengths**
- The paper is overall well written, with the claims well supported.
- This paper established a sufficient condition for avoiding collapse through sound theoretical analysis.
**Weaknesses**
- The intuitive explanations of the theoretical settings and results could be enhanced, e.g. why is the W matrix assumed to follow the Rademacher distribution?
- The experiments are relatively small scale, but this should not be over-criticized since this paper mainly focuses on the theoretical perspectives.

Other Comments Or Suggestions:
1. The study of the optimal $\beta$ parameter is lacking.
2. Typo in Lemma 1. $q_{ij}$ should be $q_j$?

Questions For Authors:
1. Why is W assumed to follow the Rademacher distribution?
2. Why do Table 1 and Table 2 use different compared methods?
3. How much extra computational burden does the large projector bring? e.g. extra computing time/storage?
4. How do the dimensional and intracluster collapses of CPLearn compare with related methods? (Figures 12-14)

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for appreciating the theoretical nature of our paper and providing constructive suggestions.

**Baselines for Table 1 and Table 2** Thank you for the suggestion. We made an extra effort in the limited time available to provide additional baselines and make Table 2 more complete. Please find below the table of results with the addition of GEDI no gen (the version without the generative model) and SwAV. The best performance is highlighted in bold for each method. Overall, we observe that CPLearn achieves linear probing performance comparable to the best methods, such as SwAV and Barlow. Moreover, CPLearn significantly outperforms all other approaches in terms of clustering performance. This stems from the unique capability of the model to perform both clustering and feature decorrelation.

| | **Clust** | | | | | | **Linear** | | | | | |
| - | - | - | - | - | - | - | - | - | - | - | - | - |
| **Projector** | **Small** | | | **Large** | | | **Small** | | | **Large** | | |
| **Method** | **128** | **1024** | **2048** | **32768** | **65536** | **131072** | **128** | **1024** | **2048** | **32768** | **65536** | **131072** |
| Barlow | 30.0% | 53.6% | **59.4%** | oom | oom | oom | 66.9% | **77.2%** | **77.2%** | oom | oom | oom |
| SwAV | 28.0% | 47.9% | 51.2% | 60.7% | 60.5% | **62.8%** | 76.6% | 77.7% | **78.0%** | 76.6% | 77.7% | 77.3% |
| DINO | 46.0% | 53.4% | 55.2% | 63.1% | **64.7%** | 64.3% | 71.8% | 73.6% | 73.9% | 75.1% | **76.2%** | 75.8% |
| GEDI no gen | 24.5% | 36.3% | **38.8%** | 32.7% | 32.9% | 33.3% | 71.8% | **73.2%** | 72.8% | 72.9% | 72.7% | 72.8% |
| CPLearn | 34.1% | 54.3% | 57.9% | 69.3% | 69.6% | **70.9%** | 70.8% | 72.8% | 70.1% | 74.5% | 74.7% | **78.0%** |

**Clarity: choice of Rademacher distribution** Thank you for the interesting question. There are several ways to define random code vectors in hyperdimensional computing (HC).
We chose to use the multiply-add-permute encoding, which leverages the Rademacher distribution to sample them. We refer you to a recent survey on HC for more details [1]. This is the simplest form of encoding, equipped with simple element-wise addition and multiplication operations to perform algebraic compositions. We are going to add this to the paper and reference the survey. It is also important to mention that the exploitation of the compositional properties of HC is beyond the scope of the paper, but we are actively working on this direction. \
Also, it is important to clarify that the code matrix W is fixed throughout training. This is necessary to achieve the collapse-proof property, as demonstrated by the theory in the paper (cf. Theorem 1 and related Corollaries).

**Reference** \
[1] Kleyko et al. A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. ACM Comp. Surv. 2022

**Time/storage analysis** We provide a table for the experiments with ResNet-18 trained for 1000 epochs on CIFAR-10, comparing all methods in terms of storage (MB) and training time (minutes). We also provide the linear classification and clustering results for the sake of completeness. The training time does not seem to be impacted much by the larger projector. The size of the checkpoints does increase significantly. However, the relative difference would be much smaller with larger networks such as ViT-S/B/L or larger ResNet architectures as the backbone, since the backbone itself would be much larger compared to the additional parameters in the projector.
| Method | Clust | Linear | State dict size [MB] | Full checkpoint size [MB] | Train time [min] |
| - | - | - | - | - | - |
| Barlow | 29.1 | 92.2 | 79 | 157 | 356 |
| SwAV | 18.9 | 89.6 | 53 | 102 | 405 |
| GEDI no gen | 44.6 | 80 | 47 | 140 | 353 |
| Self-Classifier (c=10) | 54.2 | 78 | 47 | 140 | 347 |
| Self-Classifier (c=512) | 36.9 | 84.8 | 51 | 144 | 353 |
| Self-Classifier (c=16384) | 33.9 | 64.9 | 175 | 268 | 355 |
| CPLearn (c=10) | 62.1 | 85.4 | 47 | 140 | 350 |
| CPLearn (c=512) | 47.4 | 91.6 | 51 | 144 | 357 |
| CPLearn (c=16384) | 48.2 | 91.3 | 175 | 268 | 358 |

**Typo Lemma 1** Indeed, that is a typo. Thank you for spotting it! We hope we've effectively addressed your concerns and encouraged you to reconsider your rating.

---

Rebuttal Comment 1.1: Comment: I appreciate the authors' reply, but some of my concerns seem to have been overlooked, e.g., the parameter study of $\beta$ and the collapse analysis compared with related works.

---

Reply to Comment 1.1.1: Comment: Thank you for the additional feedback. Given the space limitations of the rebuttal and the time available, we had decided to defer answering some of the questions to a second stage. Please find detailed responses to the remaining questions below.

**Analysis of $\beta$** We conducted additional experiments to evaluate the performance of CPLearn across a finer range of $\beta$ values, using ResNet-18 on CIFAR-10. The results are presented in the table below. We observed that performance improves as $\beta$ increases. However, excessively large values of $\beta$ lead to cluster collapse. This occurs because minimizing only the invariance loss ensures invariance and low-entropy predictions, but does not guarantee satisfaction of the matched prior condition described in Lemma 1. In other words, the network tends to use only a subset of codes, resulting in cluster collapse. We will include this additional analysis in the Appendix of the final paper.
| $\beta$ | linear |
| - | - |
| 2 | 10 (cluster collapse) |
| 1.5 | 10 (cluster collapse) |
| 1.25 | 10 (cluster collapse) |
| 1 | 91.6 |
| 0.75 | 90.2 |
| 0.5 | 88.1 |
| 0.25 | 82.7 |
| 0.1 | 75.1 |

**Dimensional and intra-cluster collapse analysis for different methods** Thank you for the insightful and constructive suggestion. We have extended the analysis in Figs. 12-14 to include a comparison of different methods on CIFAR-10 and CIFAR-100. Since different SSL methods have different representation statistics, we standardize the representations (zero mean and unit standard deviation) to allow for a comparison. All results are available at the following anonymous [link](https://anonymous.4open.science/r/ICML12199-1261/Collapses.pdf). The reported metrics are different from those in Figures 12-14 (where no standardization has been used). Overall, we observe that CPLearn is the only method that remains robust to both dimensional and intracluster collapse across both datasets. These results will be included in the Appendix of the final paper. Thank you again for suggesting this valuable analysis! Please let us know if you have any additional questions.
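The cluster collapse described in the $\beta$ analysis above (the network using only a subset of codes) can be monitored via the entropy of the batch-averaged prediction, which relates to the matched prior condition of Lemma 1. The following is a minimal diagnostic sketch, not the paper's code; the number of codes and batch size are illustrative:

```python
import numpy as np

def marginal_entropy(probs):
    """Entropy of the batch-averaged prediction; near zero signals cluster collapse."""
    m = probs.mean(axis=0)
    m = m[m > 0]
    return float(-(m * np.log(m)).sum())

rng = np.random.default_rng(0)
c = 8                                              # number of codes/clusters
collapsed = np.tile(np.eye(c)[0], (128, 1))        # every sample maps to code 0
spread = np.eye(c)[rng.integers(0, c, size=128)]   # samples spread across codes

print(marginal_entropy(collapsed))  # 0.0: only one code is ever used
print(marginal_entropy(spread))     # close to log(8): prior roughly matched
```

When the uniform prior is matched, the batch-averaged prediction approaches the uniform distribution and the entropy approaches $\log c$.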
Summary: This paper introduces CPLearn, a novel self-supervised non-contrastive approach that avoids heuristics such as stop gradient or momentum encoders for preventing feature collapse. CPLearn does this by utilizing a projector module and a special loss function which minimizes an invariance loss between augmented views while enforcing a prior distribution (uniform) on the features. The authors evaluated their approach on multiple different datasets, such as SVHN, CIFAR-10, CIFAR-100, and ImageNet-100.

Claims And Evidence: The experiments are conducted in a non-standard manner, making it hard to judge whether the experimental results support the claims.

Methods And Evaluation Criteria: The methods and evaluation criteria make sense for this application.

Theoretical Claims: I did not check the correctness of any proofs.

Experimental Designs Or Analyses: The experiments are sound, but some of the metrics and training configurations do not necessarily match those of previous works. Specifically, previous works typically do not use ResNet-8 or ViT models as the backbone. NMI is also not a metric typically used, which makes comparison with other SSL models difficult.

Supplementary Material: I have reviewed the training and testing procedures.

Relation To Broader Scientific Literature: As self-supervised models have been increasingly researched, many have utilized heuristics to address problems of collapse. This paper introduces a new approach to circumvent this issue and provides mathematical justification to support that it can address collapse.

Essential References Not Discussed: There are no missing related works that are essential to understanding the key contributions.

Other Strengths And Weaknesses: This paper is well-written, clearly organized, and effectively structured, with figures and algorithmic descriptions that enhance the understanding of CPLearn. The theoretical foundations are solid, and the approach is rigorously validated across multiple datasets.
However, since CPLearn leverages hyperdimensional computing, a discussion on computational resource usage and potential scalability concerns would strengthen the paper since the instability observed when using larger backbones raises concerns about the practicality of the method in large-scale settings. Providing a justification for why CPLearn remains valuable despite this limitation, or exploring possible solutions, would improve the overall discussion. Other Comments Or Suggestions: The paper was well written overall, but there seems to be a grammatical issue at the end of the “Generalization on downstream tasks” subsection in the sentence “…Self-Classifier and GEDI achieve overall perform better on clustering tasks.” Questions For Authors: No questions Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for appreciating the theoretical nature of our paper and for the time dedicated to reviewing it. Please find below the answers to the major concerns.

**Experiments on ResNet-18** Thank you for the suggestion. Please find below the table with the results on CIFAR-10 using ResNet-18. We used the [solo-learn](https://github.com/vturrisi/solo-learn) codebase to run the analysis with the same suggested hyperparameter configurations. Note that we reproduced all baselines ourselves and that their performance is slightly higher than what is reported in [solo-learn](https://github.com/vturrisi/solo-learn). Overall, the table highlights observations similar to those from the experiments on ResNet-8, with CPLearn achieving performance comparable to Barlow Twins and significantly outperforming all other approaches in terms of clustering. Interestingly, both Self-Classifier and CPLearn achieve the highest clustering performance for $c=10$, corresponding to the ground truth number of classes.

| Method | Clustering | Linear |
| - | - | - |
| Barlow | 29.1 | **92.2** |
| SwAV | 18.9 | 89.6 |
| GEDI no gen | 44.6 | 80.0 |
| Self-Classifier (c=10) | 54.2 | 78.0 |
| Self-Classifier (c=512) | 36.9 | 84.8 |
| Self-Classifier (c=16384) | 33.9 | 64.9 |
| CPLearn (c=10) | **62.1** | 85.4 |
| CPLearn (c=512) | 47.4 | **91.6** |
| CPLearn (c=16384) | 48.2 | 91.3 |

**Experiments with ViT on ImageNet-100** We respectfully disagree with the reviewer. ViT is a standard backbone for benchmarking self-supervised learning algorithms; see for instance [DINO](https://github.com/facebookresearch/dino).

**Typos** Thank you for spotting the typos. Indeed, the sentence “whereas SwAV, Self-Classifier and GEDI achieve overall perform better on clustering tasks” should be rephrased to “whereas SwAV, Self-Classifier and GEDI perform better on clustering tasks than Barlow Twins.”

We hope we've effectively addressed your concerns and encouraged a reconsideration of your rating.
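As background for the metric discussion in this thread: the clustering scores reported here are NMI (normalized mutual information) values, as the authors note elsewhere in the rebuttals. Below is a from-scratch sketch of NMI with arithmetic-mean normalization (one common convention); this is not the authors' evaluation code, and the toy label arrays are made up for illustration:

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def mutual_info(a, b):
    mi = 0.0
    for u in np.unique(a):
        for v in np.unique(b):
            p_uv = np.mean((a == u) & (b == v))
            if p_uv > 0:
                mi += p_uv * np.log(p_uv / (np.mean(a == u) * np.mean(b == v)))
    return mi

def nmi(a, b):
    """NMI with arithmetic-mean normalization; invariant to relabeling clusters."""
    a, b = np.asarray(a), np.asarray(b)
    h = 0.5 * (entropy(a) + entropy(b))
    return mutual_info(a, b) / h if h > 0 else 1.0

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: perfect clustering up to relabeling
print(nmi([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0: clusters carry no class information
```

NMI is 1 for a perfect clustering regardless of how cluster indices are named, and 0 when cluster assignments are independent of class labels, which is what makes it a standard clustering metric.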
Summary: The paper introduces CPLearn, a non-contrastive self-supervised learning method designed to avoid common failure modes—namely, representation, dimensional, cluster, and intracluster collapses. The authors propose a simple projector design and loss function, leveraging ideas from hyperdimensional computing, that naturally encourage both decorrelation and clustering of embeddings without relying on heuristics like stop gradient or momentum encoders. The method is theoretically analyzed, and its properties are established through several results. Experimental results are conducted on datasets such as SVHN, CIFAR-10, CIFAR-100, and ImageNet-100 to test for robust generalization in clustering and linear classification tasks. Claims And Evidence: The claims are supported by theoretical proofs (Lemma 1, Theorem 1) and empirical validation. The proofs leverage convex optimization and quasi-orthogonality assumptions, and experiments demonstrate improved generalization and collapse avoidance. However, the assumption that $W^TW\approx fI$ for large $c$ (practical quasi-orthogonality) is not rigorously validated beyond empirical histograms (Fig. 3). Further, the experiments use a ResNet-8 backbone on CIFAR datasets, resulting in lower overall performance compared to more standard architectures (e.g., ResNet-18/ResNet-50). Methods And Evaluation Criteria: The methods are novel and interesting: the projector design and loss terms directly address collapse modes, and the use of large dictionaries aligns with theoretical insights. Evaluation on standard SSL benchmarks (CIFAR, ImageNet-100) is appropriate, and metrics (NMI, linear probe accuracy) are widely accepted. Nonetheless, including experiments with more standard backbones like ResNet-18 and additional baselines on ImageNet-100 would improve the empirical analysis Theoretical Claims: Lemma 1’s derivation via Lagrangian multipliers and KKT conditions is sound. 
Theorem 1 assumes an orthogonal $W$, which is relaxed via hyperdimensional computing principles.

Experimental Designs Or Analyses: The experimental design is generally sound, with evaluations conducted on several datasets and extensive ablation studies. The analysis of dictionary size effects (using different settings for c) and the comparisons against multiple baselines are strengths. However, the use of a ResNet-8 backbone on CIFAR datasets leads to overall low performance compared to more common architectures, making it unclear whether the improvements would hold in more standard settings. Including experiments with ResNet-18 and adding baselines for larger-scale settings like ImageNet-100 would provide a more convincing picture.

Supplementary Material: Yes, the proofs corresponding to Lemma 1 and Theorem 1.

Relation To Broader Scientific Literature: The work bridges non-contrastive SSL (e.g., Barlow Twins) and cluster-based SSL (e.g., SwAV), addressing gaps in collapse prevention. It extends Sansone (2023) by removing the generative term and formalizing guarantees. The hyperdimensional computing connection is novel but underdeveloped. The related works section is extensive and contextualizes the contributions well within a broad range of literature.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths:
- The paper provides an in-depth analysis of clustering-related collapse, extending prior work that mainly focused on dimensional collapse.
- It offers a principled, unified design that tackles multiple collapse modes simultaneously.
- The related works section is comprehensive and well-informed, covering a broad range of literature.

Weaknesses:
- The experiments on CIFAR-10/100 use a ResNet-8 backbone, which leads to performance levels that are significantly lower (about 25%) than what is typically observed with architectures like ResNet-18. It's unclear how these gains translate to standard and more widely used architectures.
- The paper omits baseline comparisons on ImageNet-100 with methods such as GEDI and SwAV, leaving it unclear how CPLearn performs in larger-scale settings compared to the closest baselines.
- Based on the ImageNet-100 results, CPLearn does not consistently outperform DINO or Barlow Twins. The projection sizes at which CPLearn shows an advantage are not reported for some of these baselines, as many values are missing due to out-of-memory errors.
- The paper is very hard to follow. For instance, the introduction doesn't even define representation collapse and isn't well written or well motivated. It gives a very shallow background.
- In the methods section, the authors don't give any insights into their design choices corresponding to the $H$ and $P$ matrices. Why is $W$ being sampled from the Rademacher distribution? Similarly, there are no insights around the final loss term. The way the first term promotes invariance could easily be mentioned, since it minimizes an effective cross-entropy loss.

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for appreciating our work and the constructive review.

**Validation of quasi-orthogonality** $W^TW=fI$ holds in probabilistic terms, formally governed by Eq. 5 in the paper: in expectation we have $E_W[cos(w_i,w_j)]=\delta(i-j)$, with $\delta$ being a Kronecker delta function (equal to 1 iff $i=j$ and zero otherwise), and its variance is determined by $Var_W[cos(w_i,w_j)]=1/f$. For large $f$, the variance quickly approaches zero and the equality holds almost surely.

**Baselines on ImageNet-100** Please find below the table of results with the addition of GEDI no gen and SwAV. The best performance is highlighted in bold for each method. Overall, we observe that CPLearn achieves linear probing performance comparable to the best methods, such as SwAV and Barlow. Moreover, CPLearn significantly outperforms all other approaches in terms of clustering performance. This stems from the unique capability of the model to perform both clustering and feature decorrelation.
| | **Clust** | | | | | | **Linear** | | | | | |
| - | - | - | - | - | - | - | - | - | - | - | - | - |
| **Projector** | **Small** | | | **Large** | | | **Small** | | | **Large** | | |
| **Method** | **128** | **1024** | **2048** | **32768** | **65536** | **131072** | **128** | **1024** | **2048** | **32768** | **65536** | **131072** |
| Barlow | 30.0% | 53.6% | **59.4%** | oom | oom | oom | 66.9% | **77.2%** | **77.2%** | oom | oom | oom |
| SwAV | 28.0% | 47.9% | 51.2% | 60.7% | 60.5% | **62.8%** | 76.6% | 77.7% | **78.0%** | 76.6% | 77.7% | 77.3% |
| DINO | 46.0% | 53.4% | 55.2% | 63.1% | **64.7%** | 64.3% | 71.8% | 73.6% | 73.9% | 75.1% | **76.2%** | 75.8% |
| GEDI no gen | 24.5% | 36.3% | **38.8%** | 32.7% | 32.9% | 33.3% | 71.8% | **73.2%** | 72.8% | 72.9% | 72.7% | 72.8% |
| CPLearn | 34.1% | 54.3% | 57.9% | 69.3% | 69.6% | **70.9%** | 70.8% | 72.8% | 70.1% | 74.5% | 74.7% | **78.0%** |

**Experiments on ResNet-18** Please find below the table with the results on CIFAR-10 using ResNet-18. We used the [solo-learn](https://github.com/vturrisi/solo-learn) codebase to run the analysis with the same suggested hyperparameter configurations. Note that we reproduced all baselines ourselves and that their performance is slightly higher than what is reported in [solo-learn](https://github.com/vturrisi/solo-learn). Please also note that NMI is a standard and well-established metric to measure clustering performance. Overall, the table highlights observations similar to those from the experiments on ResNet-8, with CPLearn achieving performance comparable to Barlow Twins and significantly outperforming all other approaches in terms of clustering. Interestingly, both Self-Classifier and CPLearn achieve the highest clustering performance for $c=10$, corresponding to the ground truth number of classes.
| Method | Clustering | Linear |
| - | - | - |
| Barlow | 29.1 | **92.2** |
| SwAV | 18.9 | 89.6 |
| GEDI no gen | 44.6 | 80.0 |
| Self-Classifier (c=10) | 54.2 | 78.0 |
| Self-Classifier (c=512) | 36.9 | 84.8 |
| Self-Classifier (c=16384) | 33.9 | 64.9 |
| CPLearn (c=10) | **62.1** | 85.4 |
| CPLearn (c=512) | 47.4 | **91.6** |
| CPLearn (c=16384) | 48.2 | 91.3 |

**Clarity about collapses** We are going to move Figure 1 (showcasing all collapses) to the first page and then reference it at line 15 to introduce it early on.

**Design choices** The activation function in $H$ is chosen to ensure the validity of Theorem 1. We have an extra linear layer with batch norm in $H$ to ensure well-behaved statistics throughout training (as shown in Appendix A). $P$ is a standard linear layer with softmax activation. \
There are several ways to define random code vectors in hyperdimensional computing (HC). We chose to use the multiply-add-permute encoding, which leverages a Rademacher distribution. We refer you to a recent survey on HC for more details [1]. This is the simplest form of encoding, equipped with simple element-wise addition and multiplication operations to perform algebraic compositions. We are going to add this to the paper and reference the survey. It is also important to mention that the exploitation of the compositional properties of HC is beyond the scope of the paper, but we are actively working on this direction. \
We are going to explicitly mention that Eq. 2 consists of a sum of two cross-entropy losses (this can also be seen from Algorithm 1 in the main paper).

We hope we've effectively addressed your concerns and encouraged a reconsideration of your rating.

**Reference** \
[1] A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations. ACM Comp. Surv. 2022
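The quasi-orthogonality statistics discussed in these rebuttals (zero-mean pairwise cosine similarities with variance $1/f$ for a fixed Rademacher code matrix, cf. Eq. 5) are easy to check numerically. A minimal sketch with illustrative dimensions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
f, c = 4096, 512  # feature dimension f, number of code vectors c (illustrative)

# Rademacher code matrix: i.i.d. +1/-1 entries, kept fixed throughout training
W = rng.choice([-1.0, 1.0], size=(f, c))

# Each column has norm sqrt(f), so pairwise cosine similarities are (W^T W) / f
G = (W.T @ W) / f
off_diag = G[~np.eye(c, dtype=bool)]

print(np.allclose(np.diag(G), 1.0))  # True: unit self-similarity
print(abs(off_diag.mean()))          # near 0: E[cos(w_i, w_j)] = 0 for i != j
print(off_diag.var())                # near 1/f: quasi-orthogonality for large f
```

Increasing `f` shrinks the off-diagonal variance toward zero, which is the sense in which the fixed random codes become near-orthogonal in high dimensions.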
Relating Misfit to Gain in Weak-to-Strong Generalization Beyond the Squared Loss
Accept (poster)
Summary: This paper extends theoretical understanding of weak-to-strong generalization—where a strong model trained on weakly labeled data can surpass the weak model's performance—beyond regression with squared loss to general loss functions defined by Bregman divergences, including classification tasks with cross-entropy loss. It builds on prior work and generalizes their “misfit-to-gain” result by showing that for convex function classes and losses expressible via Bregman divergences, the gain in strong model performance can be quantified in terms of its disagreement (misfit) with the weak model. The paper further relaxes the convexity assumption by considering convex combinations of k strong models and shows that the misfit-gain inequality still holds up to an error term that vanishes with increasing k. Empirical validation is provided using both synthetic data and real-world NLP and vision benchmarks, demonstrating the theoretical predictions in practice. Claims And Evidence: The central claim—that performance gain in weak-to-strong generalization can be lower bounded by a misfit term (e.g., KL divergence between strong and weak models)—is clearly stated and supported by solid theoretical analysis. The authors rigorously derive this claim using generalized Pythagorean inequality for Bregman divergences and provide multiple variants (Theorems 4.1–4.3 and Corollary 4.2) to cover different settings (ideal convex classes, approximate convex combinations). Methods And Evaluation Criteria: Evaluation is carried out on relevant synthetic setups and popular benchmarks like CIFAR-10, ImageNet, BoolQ, and CosmosQA. Theoretical Claims: The derivations appear mathematically sound, and the assumptions are clearly stated (e.g., realizability, convexity, sequential consistency). The theoretical results properly extend known results and are consistent with the underlying convex geometry. Experimental Designs Or Analyses: The experimental design is solid. 
The synthetic setup mimics the theoretical conditions closely, and the real-world experiments provide practical relevance. Careful considerations are made in terms of controlling for realizability and analyzing the role of k. One minor weakness is that the empirical loss reduction does not always match the theoretical lower bound (misfit) exactly, but the authors acknowledge this and explain it as a consequence of non-realizability or optimization difficulties.

Supplementary Material: Yes, the supplementary material was reviewed, and it includes technical details for generalizing to multi-class settings and further discussions on assumptions.

Relation To Broader Scientific Literature: Not very clear. It may connect to the broader themes of co-training and disagreement-based learning.

Essential References Not Discussed: I am not very familiar with this field.

Other Strengths And Weaknesses: Slight discrepancy between theoretical and empirical gain.

Other Comments Or Suggestions:
1. The submission is not self-contained. Without reading the original paper on misfit, the reader can only understand this paper very vaguely.
2. The content of the conclusion section is very hard to understand. I suspect the wrong paragraph was included.

Questions For Authors: Could you provide more intuition or visualization around the dual space projections used in your geometric arguments?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We are glad that you like our work! We address your concerns ahead: >...submission is not self-contained While the high-level geometrical framework of viewing weak-to-strong generalization (WTSG) as “projections onto a convex space” is certainly inspired from the work of Charikar et al. 2024 (as we also generously attribute throughout our paper), we do want to emphasize that our theory in terms of Bregman divergences is a strict generalization of the results in Charikar et al. 2024. As such, their result about regression with the squared loss can be deduced as a special case of our Theorem 4.1 (as also stated in Lines 214-216). We do also set up all the preliminaries about Bregman Divergences and their geometry (Section 3) for deriving Theorem 4.1. Nevertheless, the feedback is well-taken; we will try and include a brief discussion of the Charikar et al. 2024 result in Section 3.1. >...conclusion section We will make sure to elaborate more on the future directions mentioned in the conclusion section. > Could you provide more intuition or visualization around the dual space projections used in your geometric arguments? Thank you for bringing this up. It can indeed be difficult to visualize the geometry of KL-divergence since it can behave quite differently from our familiar Euclidean geometry. Frank Nielsen has a very [nice article](https://www.ams.org//journals/notices/201803/rnoti-p321.pdf) discussing this very topic. Since we are most familiar with Euclidean geometry, it is best to appeal to those native intuitions when understanding dual space projections. More specifically, our geometric arguments require us to generalize the notion of angles so we can discuss the Pythagorean theorem in a more general setting. Bregman divergence theory gives us such a generalization, but it comes at a critical cost: asymmetry. 
In Euclidean geometry, we can measure angles between any two vectors by taking an inner-product. In Bregman geometry, we can only measure angles between a primal vector and a dual vector. But once we designate one vector as primal and one as dual, much of our traditional intuition from Euclidean geometry applies. When technically manipulating these expressions, we like to imagine that we are in Euclidean geometry and have access to inner products. The only restriction is we cannot swap terms in inner products or in squared $L_2$ distances. If proofs go through after taking into account these restrictions, then it is very likely they will hold in the general Bregman setting. We would be happy to include some more intuition and visualization regarding this in our final revision.
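The primal/dual pairing described above can be written out explicitly (a sketch in generic notation, which may differ from the paper's):

```latex
% Bregman divergence induced by a strictly convex, differentiable \phi:
D_\phi(p, q) = \phi(p) - \phi(q) - \langle \nabla\phi(q),\, p - q \rangle
% \phi(x) = \|x\|^2 recovers the squared loss; \phi(x) = \sum_i x_i \log x_i
% (on the simplex) recovers the KL divergence.

% Three-point identity: for any p, q, \pi,
D_\phi(p, q) = D_\phi(p, \pi) + D_\phi(\pi, q)
             + \langle \nabla\phi(\pi) - \nabla\phi(q),\, p - \pi \rangle,
% where the last term pairs a dual-space difference with a primal-space
% difference, i.e. the generalized "angle". If \pi is the Bregman projection
% of q onto a convex set containing p, first-order optimality makes this term
% nonnegative, yielding the generalized Pythagorean inequality
D_\phi(p, q) \ge D_\phi(p, \pi) + D_\phi(\pi, q).
```

The restriction mentioned above is visible here: the inner product couples $\nabla\phi(\pi) - \nabla\phi(q)$ (dual) with $p - \pi$ (primal), and the two roles cannot be swapped as they can in Euclidean geometry.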
Summary: The paper characterizes the gain in weak-to-strong generalization by relating it to misfit, extending the results of Charikar et al. (2024) to general Bregman divergences. This work also weakens the condition on the strong model class, which was assumed convex in Charikar et al. (2024), by allowing it to be a convex combination of functions in a general function space.

Claims And Evidence: The main theorem statement is clearly presented, and both the proof in the appendix and the proof overview in the main text appear to be well-structured and understandable.

Methods And Evaluation Criteria: N/A

Theoretical Claims: I have checked the proof of the main results, and it appears to be correct.

Experimental Designs Or Analyses: The experimental design and results appear to be well-structured.

Supplementary Material: N/A

Relation To Broader Scientific Literature: The paper extends Charikar et al. (2024) by generalizing to Bregman divergences and relaxing the convexity assumption. This contribution enhances the broader understanding of weak-to-strong generalization.

Essential References Not Discussed: Almost all essential references are included. However, there are some recent references and concurrent works released after the ICML submission period that are not discussed. I suggest the authors include and discuss these references in the next revision. In particular, the discussion of [2] and [3] seems crucial as they are concurrent works.

[1] Medvedev, Marko, et al. "Weak-to-Strong Generalization Even in Random Feature Networks, Provably." *arXiv preprint arXiv:2503.02877* (2025).
[2] Yao, Wei, et al. "Understanding the Capabilities and Limitations of Weak-to-Strong Generalization." *arXiv preprint arXiv:2502.01458* (2025).
[3] Yao, Wei, et al. "Revisiting Weak-to-Strong Generalization in Theory and Practice: Reverse KL vs. Forward KL." *arXiv preprint arXiv:2502.11107* (2025).
Other Strengths And Weaknesses: One crucial limitation of the work is the use of KL divergence in the opposite direction. As the authors noted, this deviates from the standard choice. Despite this, the paper's strength lies in the authors' effort to weaken many of the assumptions discussed in Charikar et al. (2024), such as the squared-error loss and the convexity assumption on the strong model class. See the following sections for my suggestions on improving both the strengths and weaknesses. Other Comments Or Suggestions: I acknowledge the technical challenges involved in using KL divergence in the standard direction, and the use of the opposite direction is somewhat inevitable. However, I hope the authors provide more discussion on this point, perhaps by adding a separate subsection or paragraph. Questions For Authors: Can the realizability assumption be resolved? For example, is it possible to provide a similar bound using the infimum of the error between strong models and the target model? Additionally, the main results only show that the misfit is upper-bounded by the gain in weak-to-strong generalization, while the experimental results seem to suggest a positive correlation between the two. Can we also show that the misfit is "lower-bounded" by the gain in weak-to-strong generalization, or is there a case where the misfit is small but the gain in weak-to-strong generalization is large? If the authors' response successfully addresses my suggestions or questions, I am open to increasing my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We address your concerns ahead: >...recent references, concurrent works As the reviewer notes, **the concurrent works listed were released after the ICML submission (and some fairly recently).** Nevertheless, we will be sure to cite these works and include some discussion in our final revision, since they do provide complementary perspectives. >...use of KL divergence in the opposite direction...this deviates from the standard choice...more discussion on this point Thank you for bringing this up, we want to make sure that the ideas can be understood since they are counterintuitive! Our understanding of the differences between forward and reverse KL is based on [this nice blogpost](https://tinyurl.com/2p9zr93x). Roughly we should think of forward KL as being *mass-seeking*: It prioritizes learning a distribution that covers all possibilities dictated by the teacher. On the other hand, reverse KL is *mode-seeking*: it prioritizes learning a distribution that captures the most frequent behavior in the teacher. One can notice this behavior in how the two loss functions handle a student that disagrees with the teacher and predicts 0% probability for an event: forward KL will be $+\infty$ while reverse KL will exclude that event from the loss. Conversely if the teacher predicts 0% probability for an event and the student disagrees, then forward KL will disregard that event, reverse KL will become $+\infty$. In the context of weak-to-strong generalization (WTSG), it is plausible that the teacher makes errors. Thus, we *do not* want our student to be mass-seeking. The student should be free to disagree with the teacher. It should be *mode-seeking* as this is likely where most of the signal from the teacher comes from. We will include this discussion (and possibly visualizations) on the reverse KL in the paper. 
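To make the 0%-probability asymmetry above concrete, here is a toy numerical check (the distributions are made up purely for illustration):

```python
import math

def kl(p, q):
    """KL(p || q), with the conventions 0*log(0/q) = 0 and p*log(p/0) = +inf for p > 0."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0:
            continue             # events excluded by the first argument drop out of the loss
        if qi == 0:
            return math.inf      # mass on an event the second argument rules out
        total += pi * math.log(pi / qi)
    return total

teacher = [0.5, 0.5, 0.0]        # teacher assigns 0% probability to the third event
student = [0.4, 0.4, 0.2]        # student disagrees and covers that event

fwd = kl(teacher, student)       # forward KL(teacher || student): finite, event disregarded
rev = kl(student, teacher)       # reverse KL(student || teacher): +inf

student0 = [1.0, 0.0, 0.0]       # student assigns 0% to an event the teacher supports
fwd0 = kl(teacher, student0)     # forward KL: +inf
```

This is exactly the mass-seeking vs. mode-seeking behavior described above: reverse KL forbids the student from inventing mass where the teacher has none, but lets the student freely zero out (i.e., disagree with) parts of the teacher's distribution.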
The fact that the misfit term in the Pythagorean inequality manifests in the reverse direction shows that the reverse KL is in fact a *better-suited* objective function for WTSG in classification—we show that minimizing reverse KL divergence in the WTS training procedure leads to provable and quantifiable WTSG, whereas minimizing forward KL (i.e., the standard cross-entropy) may not necessarily yield these guarantees. Our result thus shows that there is a certain “directionality” to the correlation between misfit and gain when one goes beyond the squared loss. In fact, we also see (see Rebuttal to 93MC) that the reverse KL setup yields better WTSG experimentally when compared to the standard setup---to this extent, we view the use of reverse KL as a feature rather than a limitation. >...realizability assumption We would like to emphasize that realizability of the ground truth is *not* a critical feature of Theorems 4.1 and 4.3. In particular, note that in these theorems, the weak model has no assumptions placed on it. Thus, we could just as easily take $g$ in Theorems 4.1/4.3 to be the optimal strong model that approximates (with respect to any metric) the possibly non-realizable target function, and we would obtain a misfit-gain inequality with respect to this $g$ (see also Section 5.5, where we do precisely this). This perspective is in line with that taken in Burns et al. (2023), where they care about *performance gap recovered (PGR)*, which, in some sense, does away with the realizability assumption. As you mention, a result similar to Theorem 2 in Charikar et al. (2024), which incorporates the "distance" of the target function from the strong class into the bound, would be interesting. This needs a little more care in the KL-divergence setting since we don’t have a triangle inequality. We left this result out of our paper to focus more on our Theorem 4.3, which generalizes the misfit-gain inequality to *non-convex* classes of strong models.
However, in the reference [3] that you mention, this generalization is worked out. >...lower-bounding misfit by gain Lower-bounding the misfit is tricky because we don’t have a triangle inequality for KL divergence. However, in reference [3], such a lower bound was proven, albeit at the cost of a potentially large constant. Without some regularization assumptions, we suspect it would be hard in general to control this constant. --- Rebuttal Comment 1.1: Comment: Thank you for the response. It successfully addresses my concerns. I believe that adding further discussion, particularly regarding the use of reverse KL and connections to concurrent work, would further strengthen the manuscript. Accordingly, I have increased my score to 3 (weak accept).
Summary: This paper generalizes the conclusion that "performance gain correlates with misfit in weak-to-strong generalization" from prior work on squared loss to Bregman divergence loss. It provides empirical evidence through experiments on synthetic tasks, language tasks, and vision tasks. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The datasets and benchmarks are appropriate and make sense for the problem. Theoretical Claims: I didn’t go through all the proofs in detail, but the theoretical results seem convincing to me. Experimental Designs Or Analyses: The experiments are sound and valid. Notably, compared to prior work (Charikar 2024), they additionally include experiments with language and vision tasks. Supplementary Material: I skimmed through the supplementary material but did not verify all the mathematical details. Relation To Broader Scientific Literature: This paper is particularly related to Charikar (2024). It generalizes the prior results on squared loss to Bregman divergence loss, making them applicable to classification tasks. Additionally, compared to Charikar (2024), which conducted synthetic experiments and experiments on molecular predictions, this paper includes experiments involving language and image tasks to validate its results. Essential References Not Discussed: I didn't notice any. Other Strengths And Weaknesses: Strengths: The theoretical results seem solid and effectively generalize prior work. Additionally, the paper includes experiments on a larger scale and in more commonly relevant domains (language and vision) compared to the initial work on misfit-gain relation. Weaknesses: My main concerns and questions relate to the practical aspects, which are outlined in the questions section. Other Comments Or Suggestions: I don't have other comments. Questions For Authors: 1. 
While I appreciate the theoretical contribution of the paper—particularly in how it generalizes the conclusion in Charikar (2024) from squared loss to Bregman divergence, which is a non-trivial task—I still wonder whether it provides any additional insights beyond confirming the correlation between misfit and gain. 2. The authors mention that their theory regarding convex combinations of k strong models provides a "recipe" for minimizing the discrepancy between the weak supervisor and strong student, but “in the opposite direction”. While I understand the mathematical formulation, what is the intuitive interpretation of this being in the opposite direction? How does this impact the results differently compared to simply using the regular loss (i.e., placing the supervisor’s output as the first argument)? Would the standard approach lead to worse results and why? 3. In the experiments (e.g., those in Section 5.2), the authors mention using the reverse KL divergence objective for consistency with the theory while acknowledging that this choice might negatively affect performance. Does this suggest certain impractical aspects of the theoretical framework? Specifically, does the theory require an experimental setup that is not practical in order to produce results that align with its predictions? If the experiments were conducted in a more standard manner, would the same trends and conclusions still hold? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We are glad that you like our work! We address your concerns ahead: >1...additional insights beyond confirming the correlation between misfit and gain. >2...intuitive interpretation for reverse KL... We address both 1) and 2) in our response to Reviewer 7oRJ. Please also see our response to Reviewer fGrp for some intuition on the geometry induced by Bregman Divergences. >3...impractical aspects...If the experiments were conducted in a more standard manner, would the same trends and conclusions still hold? We thank the reviewer for prompting us to do this comparison; we believe the obtained results lend more conceptual support to our theory. For the experiments in Section 5.2, we ran a comparison where we perform linear-probing for the strong model in the “standard” manner. That is, in the weak-to-strong training phase, instead of finetuning a convex combination of $k=100$ linear layers on the reverse KL objective, we finetune these on the forward cross-entropy objective XE(weak, strong) (i.e., the natural direction), and compare the findings to the numbers presented in Section 5.2. Note that the only difference between the settings now is the objective that the linear layers are finetuned on. The table below shows the comparison for the test accuracy of the strong model, as well as the XE loss between the ground truth and the strong model.

| Dataset | Test accuracy (Forward KL) | Test accuracy (Reverse KL) | XE(gt, strong) (Forward KL) | XE(gt, strong) (Reverse KL) |
|---|---:|---:|---:|---:|
| cosmos_qa | 0.6378 | **0.6407** | 0.6147 | **0.6141** |
| amazon_polarity | 0.8963 | **0.8984** | 0.2788 | **0.2570** |
| boolq | 0.6140 | **0.6141** | **0.6464** | 0.6476 |
| sciq | 0.6171 | **0.6254** | 0.6439 | **0.6436** |
| CIFAR10 | 0.8981 | **0.9005** | 0.4052 | **0.3361** |
| ImageNet | **0.7133** | 0.6986 | **1.4042** | 1.4742 |

The comparison is indeed quite interesting—we can see that the strong model that is finetuned on the **reverse KL objective shows better final test accuracy (i.e., better WTSG) for nearly all the datasets!** Namely, we do not really see significant performance degradation (in fact, we see improvement in nearly all cases) with reverse KL, when compared to the standard setup. Moreover, we also computed the gain and misfit terms in the Pythagorean inequality, where for the reverse KL experiment, we compute the reverse misfit that we propose (i.e., KL(strong, weak)), whereas in the standard forward XE experiment, we compute the misfit as XE(weak, strong) (i.e., the “natural” misfit that one might consider).

| Dataset | Gain (Forward KL) | Gain (Reverse KL) | Misfit (Forward KL) | Misfit (Reverse KL) |
|---|---:|---:|---:|---:|
| cosmos_qa | 0.0368 | 0.0372 | 0.6093 | 0.0613 |
| amazon_polarity | 0.0717 | 0.0934 | 0.3717 | 0.1444 |
| boolq | 0.0706 | 0.0694 | 0.6319 | 0.0980 |
| sciq | 0.0264 | 0.0266 | 0.6558 | 0.0961 |
| CIFAR10 | 0.1772 | 0.2447 | 1.1024 | 0.3185 |
| ImageNet | 0.5217 | 0.4517 | 3.9392 | 1.6401 |

We can clearly see that if one were to run the standard WTSG setup, the Pythagorean inequality is completely off (gain and misfit don’t quantitatively align), whereas the reverse misfit is more representative of the gain, as confirmed by our theory as well.
Again, this indicates a clear “directionality” in the Pythagorean inequality for WTSG in the classification setting! We would be happy to include this discussion in the final version. It is plausible that in practice, running standard forward XE(weak, strong) minimization might lead to a minimizer that is in fact close to the minimizer of the reverse KL(strong, weak) that we propose. It would be an interesting future direction to precisely characterize scenarios when this is the case. Finally, we want to emphasize that in our experiments, we only do weak-to-strong finetuning of the last linear layer. In contrast, the best numbers reported in the original WTSG paper by Burns et al. 2023 correspond to full finetuning (on the forward KL objective) of all the weights of the architecture, and the WTS accuracies are hence naturally better. The primary concern we point out about reverse KL minimization is the non-convexity of the objective, which can possibly cause issues for first-order methods. However, if one is finetuning all the weights in the architecture anyway, this concern is not central, since in such a case, even the standard forward cross-entropy objective becomes non-convex.
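To make the setup concrete, here is a stripped-down sketch of reverse-KL training over a convex combination of heads (an illustration with synthetic head outputs and weak labels; unlike our experiments, the heads stay frozen here and only the mixture weights are learned by plain gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)
n, C, k = 200, 4, 10                          # samples, classes, heads (toy sizes)

p = rng.dirichlet(np.ones(C), size=(k, n))    # frozen head predictions p_i(y|x), shape (k, n, C)
w = rng.dirichlet(np.ones(C), size=n)         # weak-teacher label distributions, shape (n, C)

def mixture(theta):
    a = np.exp(theta - theta.max())
    a /= a.sum()                              # simplex weights via softmax
    return a, np.tensordot(a, p, axes=1)      # q = sum_i a_i * p_i, shape (n, C)

def reverse_kl(q):
    return float(np.mean(np.sum(q * np.log(q / w), axis=1)))   # mean_x KL(q(.|x) || w(.|x))

theta = np.zeros(k)
start = reverse_kl(mixture(theta)[1])
for _ in range(300):                          # plain gradient descent on theta
    a, q = mixture(theta)
    g_q = (np.log(q / w) + 1.0) / n           # dL/dq
    g_a = np.einsum("knc,nc->k", p, g_q)      # dL/da_i (q is linear in a)
    theta -= 0.5 * a * (g_a - a @ g_a)        # chain rule through the softmax
a, q = mixture(theta)
```

Note that since $q$ is linear in the mixture weights and $\mathrm{KL}(q, w)$ is convex in $q$, the objective is convex over the simplex; the softmax reparametrization is only for convenience, and the non-convexity discussed above enters once the heads themselves are finetuned.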
Summary: This paper generalizes the recent theoretical analysis of weak-to-strong generalization beyond squared loss regression to arbitrary Bregman divergence-based loss functions in the fixed-representation finetuning setting when the strong class is convex. - For classification tasks, the authors propose to minimize the expected KL divergence between the strong model’s output and the weak labels by optimizing a convex combination of $k$ logistic regression layers on top of the strong model representation. With this special classification objective, it is shown that the gain in weak-to-strong generalization is at least the KL divergence between the strong model and the weak model at the conclusion of weak-to-strong training (i.e. the misfit). - Empirical results on synthetic, NLP (BoolQ, SciQ, CosmosQA, Amazon Polarity), and vision datasets (CIFAR-10, ImageNet) support the theoretical predictions, demonstrating a correlation between misfit and performance gain. Claims And Evidence: Yes, claims made in the paper are well-structured, supported by clear theoretical analysis and convincing empirical evidence. Methods And Evaluation Criteria: Yes, the theoretical tools used in the paper are reasonable. Theoretical Claims: I checked all the theoretical statements and proof sketches in the main text, but not all the proofs in the appendix. The main theoretical results in the paper seem reasonable and well-structured. Experimental Designs Or Analyses: I checked the experiments in the main text but not the details in the appendix. The experiments provide sufficient evidence to support the theoretical claims. Supplementary Material: Not applicable. Relation To Broader Scientific Literature: The paper builds upon recent theoretical insights by Charikar et al. (2024), which first established a misfit-based characterization of weak-to-strong generalization specifically for regression with squared loss. 
By employing concepts from information geometry and Bregman divergences, the authors extend this analysis to classification tasks. Essential References Not Discussed: To my knowledge, the paper discussed the essential references in the field. Other Strengths And Weaknesses: Strengths: - The analysis is well-structured. - The proposed objective seems interesting (but could be better motivated). - The empirical evidence is extensive for a theoretical paper. Weaknesses: - The intuition behind the key contribution does not seem to be clearly explained in the introduction, lines 071-094. In this paragraph, the authors described a special classification objective whose corresponding misfit is connected to the gain in weak-to-strong generalization. However, the objective is described purely verbally, with neither (a sketching of) the formulation nor any intuitions why the formulation works. - A major concern that pushes me toward a negative rating is that the contributions seem marginal. 1. The main contribution of this work is extending the analysis in Charikar et al. (2024) on the connection between misfit and weak-to-strong generalization gain to the classification setting. The main results in the paper either come with strong assumptions like convexity or are built upon special objectives that could be better motivated. 2. From a high-level perspective, I appreciate the neat theoretical connection between misfit and weak-to-strong generalization gain as unveiled in Charikar et al. (2024). However, beyond a different objective and the associated theoretical tools, I had difficulty finding new insights in this work, e.g., why weak-to-strong generalization happens in the classification setting; whether the mechanism is different from the regression setting in Charikar et al. (2024); and how to quantify the gain in weak-to-strong generalization when taking training configurations (like sample complexities, learning dynamics, and architectures) into consideration.
Other Comments Or Suggestions: Some minor questions: - In the abstract, line 24, the notion of "strong class" is not clear in the context. - Line 260 (left), what's $\mathcal{F}^*$? Questions For Authors: Major questions are raised in the "Other Strengths And Weaknesses" section. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you so much for reading our paper, and for your comments. We address your concerns ahead: > ...intuition behind key contribution not clearly explained Thank you for bringing this up, we want to make sure that the ideas can be understood since they are counterintuitive! Our understanding of the differences between forward and reverse KL is based on [this nice blogpost](https://tinyurl.com/2p9zr93x). We want to note that, **after the ICML deadline**, an [independent work](https://tinyurl.com/mr2jjbr8) also highlighted the value of reverse KL divergence. Roughly we should think of forward KL as being *mass-seeking*: It prioritizes learning a distribution that covers all possibilities dictated by the teacher. On the other hand, reverse KL is *mode-seeking*: it prioritizes learning a distribution that captures the most frequent behavior in the teacher. One can notice this behavior in how the two loss functions handle a student that disagrees with the teacher and predicts 0% probability for an event: forward KL will be $+\infty$ while reverse KL will exclude that event from the loss. Conversely if the teacher predicts 0% probability for an event and the student disagrees, then forward KL will disregard that event, reverse KL will become $+\infty$. In the context of weak-to-strong generalization (WTSG), it is plausible that the teacher makes errors. Thus, we *do not* want our student to be mass-seeking. The student should be free to disagree with the teacher. It should be *mode-seeking* as this is likely where most of the signal from the teacher comes from. We will include this discussion (and possibly visualizations) on the reverse KL in the paper. >...strong assumptions like convexity...new insights beyond Charikar et al. 2024...sample complexity, dynamics While the high-level framework of viewing WTSG as “projections onto a convex space” is certainly inspired from the work of Charikar et al. 
2024, we want to highlight that our theory in terms of Bregman divergences is more complete, in that it recovers their results for regression as a special case, and also paves the way for other settings. Furthermore, as described in the paper, several things become more complicated while applying this theory to non-symmetric Bregman divergences. We believe that our technical contributions towards fixing these yield insights that are of interest by themselves. First, the fact that the misfit term in the Pythagorean inequality manifests in the reverse direction shows that the reverse KL is a *better-suited* objective function for WTSG in classification—we show that minimizing reverse KL divergence in the WTS training procedure leads to provable and quantifiable WTSG, whereas minimizing forward KL (i.e., the standard cross-entropy) may not necessarily yield these guarantees. In fact, we also see (see Rebuttal to 93MC) that the reverse KL setup yields better WTSG experimentally when compared to the standard setup. Our result thus shows that there is a certain “directionality” to the correlation between misfit and gain when one goes beyond the squared loss. Second, our guarantee (Theorem 4.3) for reverse KL minimization **holds even if the function space is non-convex** (unlike the work of Charikar et al. 2024). Our solution to handle non-convexity, namely optimizing over a convex combination of $k$ functions so as to approximate the convex hull of the function space, is both practical (as evidenced by the experimental results), and also allows us to derive concrete theoretical guarantees. The error term in the guarantee becomes smaller as $k$ increases, matching the intuition that the convex hull is being approximated better.
**Importantly, this solution allows us to optimize over convex combinations of functions from an arbitrary (possibly non-convex) function space, and not just linear/logistic heads!** This opens the gates for obtaining WTSG guarantees with *full-finetuning*, and not just linear probing. In particular, the same bound holds even if we finetune all the weights of a convex combination of neural networks—such freedom is not afforded in the theory of Charikar et al. 2024. Our solution builds on an insightful connection with existing density estimation methods, and requires non-trivial adaptations to our setting. We view this contribution as novel, and possibly also as a useful modeling paradigm in other learning settings, especially given that our experimental numbers improve with reasonably large values of $k$. Finally, sample complexity bounds are a straightforward adaptation from Charikar et al. (2024), and we wanted to keep the paper focused on new contributions. As for learning dynamics, for our initial study, we assume that the $\arg\min$ of the reverse KL can be approximated well (as also suggested by our experiments); nevertheless, this is an interesting future direction. >...what's $\mathcal{F}^*$ It is the dual space of $\mathcal{F}$ (see Line 123). --- Rebuttal Comment 1.1: Comment: Thanks for the responses. I partially agree with the authors that, from a technical perspective, extensions beyond convex functions and linear probing are valuable in some sense. However, subjectively, I don't find these extensions alone to be strong enough for an ICML paper. I also agree that the new perspective on forward and reverse KL could be insightful. But I don't see how this new perspective can be naturally integrated into the original story based on the special objective proposed. Overall, I believe the paper would benefit from a major revision of its main messages and storyline. Therefore, I will maintain my current score.
On Exact Bit-level Reversible Transformers Without Changing Architecture
Accept (poster)
Summary: The paper proposes BDIA-transformer, a novel approach combining the bidirectional integration approximation (BDIA) method and activation quantization to achieve exact bit-level reversibility in standard transformer architectures. This combination significantly reduces memory usage during training via online back-propagation, where activations are recomputed on-the-fly rather than stored. Additionally, random sampling of the hyperparameter $\gamma \in \{-0.5, 0.5\}$ introduces regularization that empirically improves generalization. Experiments on ViT-based image classification (CIFAR-10/100) and GPT2-based language translation show BDIA-transformer achieving better validation performance and reduced training memory compared to baseline transformers. Claims And Evidence: Most claims concerning improved validation accuracy, reduced memory consumption, and generalization benefits are supported by experiments. While implicitly verified, it would be great to update Fig. 2 to add the reconstruction errors when using a model following Equation 24 in order to confirm the validity of the exact bit-level inversion. Methods And Evaluation Criteria: The methods and evaluation criteria generally make sense for the demonstrated applications. CIFAR10/100, GPT2 fine-tuning, and English-French translation tasks are adequate benchmarks to show memory efficiency and regularization effects. However, these memory considerations often arise in large-scale settings where the model's capacity is the limiting factor, but the experiments are done in a context where data is the limiting factor (overfitting setting). Theoretical Claims: No formal proofs were provided to substantiate theoretical claims, particularly regarding exact reversibility under quantization, leaving this as a significant gap. Experimental Designs Or Analyses: The experimental designs appear valid, clearly illustrating the benefits of BDIA-transformer under limited data conditions.
Nonetheless, an analysis of scenarios where model capacity, rather than data quantity, limits performance would help clarify the applicability and limitations of the proposed regularization method. While large-scale experiments are not expected, toy experiments could give a good picture of how the method trades model capacity for improved generalization. Supplementary Material: No supplementary material was provided; thus, no additional experimental details or source code could be reviewed. Relation To Broader Scientific Literature: The paper positions itself appropriately within the literature on reversible neural networks and activation quantization, clearly differentiating its unique combination of these two ideas from existing approaches like RevNet and RevViT. Essential References Not Discussed: To my knowledge, the paper adequately cites and discusses relevant existing literature. Other Strengths And Weaknesses: Strengths: - Novel combination of reversible architectures with quantization for memory efficiency and regularization. - Empirical validation across multiple popular tasks demonstrates general applicability. - The fact that it minimally changes the architecture allows the use of this method for finetuning, which is a creative and powerful use case. Weaknesses: - Missing formal proofs regarding exact reversibility under quantization. - No detailed exploration of how quantization precision (parameter l) impacts performance and memory. It would help to disentangle the benefits from reversibility and the benefits from quantization. - Limited experimental settings that explicitly test the potential for underfitting or performance degradation when model capacity is constrained. Other Comments Or Suggestions: - Clarify the theoretical underpinnings of the reversibility claim explicitly. - Include sensitivity analysis regarding quantization precision. - Consider additional experiments explicitly varying model size and data to highlight potential trade-offs.
Questions For Authors: - I am curious about how sensitive your method is to the choice of quantization precision (l). In particular, how does this precision affect both performance and reversibility? - I don't see particular counter-indications to applying this strategy to blocks other than attention. Can similar observations be made for convolutional networks? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 notes that the fact that it __minimally changes the architecture allows the use of this method for finetuning, which is a creative and powerful use case__. We would greatly appreciate the reviewers checking our detailed rebuttal responses and reconsidering their scores for our paper. __(1) Clarify the theoretical underpinnings of the reversibility claim explicitly.__ In the revision, we will introduce a proposition and a proof in Subsection 4.3 to explicitly formulate the reversibility claim. Basically, in the proof, we will show that with quantization and the side information $s_{k-1}$, $x_{k-1}$ can be reconstructed exactly from $x_k$ and $x_{k+1}$ in online back-propagation, which only involves simple algebra. The side information $s_{k-1}$ essentially accounts for the binary quantization loss introduced by the configuration $\gamma_k\in \\{\pm 0.5\\}$. __(2) Include sensitivity analysis regarding quantization precision__ As also suggested by reviewers 7ZwY and 8yrY, we fully agree that an ablation study on the impact of different quantization levels would be really helpful for understanding the behavior of BDIA. A complete sensitivity analysis would require additional investigation, but we have performed additional experiments testing the performance of both ViT and BDIA-ViT for the quantization levels of $\\{5, 0, -2\\}$ over CIFAR10. In principle, the quantization level of $l=-2$ provides a very coarse quantization effect and should lead to degraded performance. As shown in the table below, when the quantization level goes from 5 to -2, the performance of ViT and BDIA-ViT decreases as expected.
One can also observe from the table that when $l\in \\{5, 0\\}$, BDIA-ViT performs consistently better than ViT. For the case of $l=-2$, ViT performs slightly better than BDIA-ViT. This is likely because the very coarse quantization dominates the performance and negatively affects the BDIA training technique.

| quantization level | 5 | 0 | -2 |
|---|---|---|---|
| ViT | 87.90 | 87.76 | __84.31__ |
| BDIA-ViT | __89.50__ | __88.47__ | 84.16 |

__(3) Regarding the relationship between quantization precision and reversibility.__ In principle, the quantization precision is independent of the reversibility of BDIA-transformer: reversibility is guaranteed at any particular quantization level, provided the side information is stored. As we mentioned above, the quantization precision has a noticeable impact on the validation performance. We will clarify the above point in the revision. __(4) Consider additional experiments explicitly varying model size and data to highlight potential trade-offs__ Many thanks for the comments. We have conducted additional experiments by varying the number of transformer blocks in ViT and BDIA-ViT when trained over CIFAR10. As shown in the table below, BDIA-ViT performs consistently better than ViT for different numbers of transformer blocks. We can also observe that increasing the number of transformer blocks does not lead to better validation performance for a fixed dataset. Neural "scaling" laws suggest that the model size should scale along with the increasing size of the dataset in a log-log relationship. We are not sure if we understand your suggestion on varying data for the additional experiments. It would be great if you could further elaborate on the above point so that we can perform appropriate additional experiments.
| number of transformer blocks | 6 | 10 |
| ---------------------------- | ---- | ---- |
| ViT | 88.15 | 88.02 |
| BDIA-ViT | __89.10__ | __89.08__ |

__(5) Regarding the applicability of BDIA to other DNN blocks.__

Reviewer 8yrY suggested that we consider other transformer variants, such as the linear transformer. Yes, we agree with your observation, although of course it would be necessary to empirically verify this. We have evaluated the BDIA technique for training vision-linformer over CIFAR10 by adopting the open-source code from https://github.com/pranavphoenix/VisionXformer/tree/main. Basically, in this experiment we replaced the attention block in ViT with the linear attention block from the above open-source code. The table below shows that BDIA again improves the validation accuracy considerably. Due to the very limited time for rebuttal, we did not have time to test other DNN blocks. It would be of great interest to further investigate the effectiveness of BDIA for other DNN blocks such as Longformer, Mamba blocks, and convolutional networks in the future. We hope our work provides motivation for the above research directions.

| | CIFAR10 |
| --------------------- | ------ |
| vision-linformer | 84.16 |
| BDIA-vision-linformer | __86.6__ |

---

Rebuttal Comment 1.1: Comment: Points 1, 3, and 5: I do not doubt that the statement is true, but having such a proof would help the reader understand the roots of the method. In particular, a reader (like me) would have understood that the questions in 3) and 5) are a bit irrelevant by looking at the proof's assumptions.

Point 2: I would like to apologize for the wrong phrasing of my question: by performance and memory, I meant performance in terms of speed (as quantization is often used to speed up training).
Without any code or appendix to look at, I assumed that the quantization was done through a framework like [torch.amp](https://pytorch.org/docs/stable/amp.html), which made me think that quantization directly induces a speedup and a memory gain (by switching from float32 to bfloat16 or float8). However, after a more careful reading of eq. 21, it seems that only the activations at the beginning/end of each transformer block are quantized (but the computation within each block is still full precision, as $Q[(1-\gamma)x_k + (1+ \gamma)h_k(x_k)] \neq Q[(1-\gamma)x_k] + Q[(1+ \gamma)h_k(x_k)]$). Since those activations are not stored in online backprop, I had not understood that quantization has no impact on training speed/memory consumption.

Point 4: My question was around the observation that the experiments are done in a context where you want to mitigate overfitting; however, large-scale settings are more about mitigating underfitting. The second regime can be explored by reducing the model size.

I will keep my rating at weak accept; I would not oppose this paper's acceptance, as I do agree with the novelty and effectiveness of the approach. On the other hand, I will not raise my grade because the *paper writing* falls short of the strong potential of *this idea*: with the total absence of appendices and code, the reader is forced to guess important aspects of the paper (the discussion about point 2 is a good example of such a guess). In short, I feel bad seeing this paper being weakly accepted, whereas it would be strongly accepted with sufficient appendices and available code.

---

Reply to Comment 1.1.1: Comment: Many thanks for the valuable feedback for us to improve the paper.

(1) For your information, we have applied BDIA to training ResNet networks over CIFAR10 and again obtained positive results in terms of validation performance.
| | validation accuracy |
| ----------------------------------- | ------ |
| ResNet with 15 residual blocks | 92.93 |
| BDIA-ResNet with 15 residual blocks | __93.61__ |

(2) As suggested by reviewer BqRX, we have also studied the impact of dropout rates $\\{0.0, 0.1, 0.2\\}$ on the validation performance of BDIA-ViT over CIFAR10 and CIFAR100. The results for the dropout rate of 0.1 are taken from the paper. The table below demonstrates that dropout and BDIA can work together to improve the validation performance.

| dropout | ViT (CIFAR10) | BDIA-ViT (CIFAR10) | ViT (CIFAR100) | BDIA-ViT (CIFAR100) |
| ------- | ----- | ----- | ----- | ----- |
| 0.0 | 86.27 | __89.20__ | 59.13 | __64.45__ |
| 0.1 | 88.15 | __89.10__ | 61.86 | __66.09__ |
| 0.2 | 87.24 | __88.22__ | 61.68 | __64.24__ |
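As a side note on the quantization discussion in this thread (eq. 21 and Point 2 above): the non-distributivity $Q[a+b] \neq Q[a]+Q[b]$ is easy to verify numerically. The sketch below assumes a simple fixed-point quantizer $Q_l(x)=\mathrm{round}(2^l x)/2^l$ for illustration; the paper's actual quantizer may differ in detail.

```python
import numpy as np

def Q(x, l):
    # Assumed fixed-point quantizer: round to a grid of spacing 2**(-l).
    return np.round(x * 2.0**l) / 2.0**l

gamma, l = 0.5, 2   # gamma in {+-0.5} as in BDIA; l = 2 is an illustrative level
x, h = 0.2, 0.2     # stand-ins for x_k and h_k(x_k)

lhs = Q((1 - gamma) * x + (1 + gamma) * h, l)          # quantize the combined update
rhs = Q((1 - gamma) * x, l) + Q((1 + gamma) * h, l)    # quantize each term separately

print(lhs, rhs)  # 0.5 0.25 -- quantization does not distribute over the sum
```

This is exactly why, as the reviewer notes, only the block-boundary activations can be quantized while the computation inside each block stays full precision.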
Summary: The paper introduces the BDIA-transformer, a novel reversible transformer that maintains the standard transformer architecture during inference while leveraging a technique called bidirectional integration approximation (BDIA) for reversibility. The key idea is to treat each transformer block as an Euler integration approximation for solving an ordinary differential equation (ODE) and then apply BDIA to achieve reversibility. Additionally, activation quantization is incorporated to ensure exact bit-level reversibility. Experimental results on image classification, natural language generation, and language translation tasks show that BDIA-transformers outperform conventional transformers in validation accuracy while significantly reducing training memory.

Claims And Evidence: The paper claims that BDIA-transformers achieve exact bit-level reversibility without altering the standard transformer architecture during inference. The proposed method reduces memory consumption in training while improving validation accuracy through model regularization. Experimental evidence supports these claims, showing improved validation performance across multiple tasks and datasets. The memory savings are attributed to online back-propagation enabled by reversibility, and the performance gain is linked to the ensemble effect of ODE solvers induced by the stochastic nature of the hyper-parameter. The claim that BDIA-transformers outperform RevViT is also supported by empirical results, though the performance gain varies across datasets.

Methods And Evaluation Criteria: The evaluation is conducted on tasks including image classification, natural language generation, and language translation. The metrics used include validation accuracy, training loss, and peak memory consumption. The use of CIFAR-10, CIFAR-100, and the E2E dataset provides diverse benchmarks to assess the generalization of the proposed method.
The evaluation criteria are appropriate for demonstrating the advantages of memory efficiency and performance improvements in BDIA-transformers compared to standard transformers and RevViT.

Theoretical Claims: The theoretical claim is supported by the mathematical formulation of the update expressions. The derivation of BDIA update rules and the use of activation quantization ensure that each forward and backward step can be losslessly reconstructed. The use of lightweight side information per transformer block to counteract binary quantization loss further strengthens the theoretical soundness.

Experimental Designs Or Analyses: Experimental results compare the proposed model against baseline transformers and RevViT across various tasks. Multiple datasets and evaluation metrics are adopted. The training setup, involving multiple experimental repetitions, mitigates the effect of randomness. The inclusion of ablation studies exploring the impact of different hyper-parameters and quantization settings provides additional insights into the model's behavior.

Supplementary Material: I checked the detailed training setups and additional experimental results in the appendix.

Relation To Broader Scientific Literature: The paper builds on prior work in reversible neural networks and ODE-based modeling, extending these ideas to transformers. References to RevNet, neural ODEs, and RevViT contextualize the contributions within the broader literature. The connection to diffusion models and integration approximations further enriches the theoretical foundation.

Essential References Not Discussed: A deeper discussion of recent advances in memory-efficient transformer training techniques would strengthen the paper. Notably, works on linear attention mechanisms and memory-efficient self-attention could provide alternative perspectives on reducing memory consumption.
Other Strengths And Weaknesses:

Strengths:
- Novel use of BDIA in transformers, achieving exact bit-level reversibility without modifying the inference architecture.
- Significant memory savings and improved validation performance.
- Thorough experimental evaluation across diverse tasks.
- Insightful ablation studies on the hyper-parameter and activation quantization.

Weaknesses:
- Increased memory consumption due to storing lightweight side information.
- Dependence on quantization introduces complexity in implementation.

Other Comments Or Suggestions:
- Can you clarify the impact of different quantization levels on performance?
- A discussion on the trade-off between memory savings and computational overhead would be helpful.

Questions For Authors:
1. How does the choice of quantization level affect the model's performance and memory savings?
2. Can BDIA be applied to other transformer variants, such as linear transformers or long-form transformers?
3. What are the potential limitations of BDIA when scaling to large models with more than 100B parameters?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 states __minimally changes the architecture allows the use of this method for finetuning, which is a creative and powerful use case__. We would greatly appreciate the reviewers checking our detailed rebuttal responses and reconsidering their scores for our paper.

__(1) Regarding the impact of different quantization levels on performance.__

We have performed additional experiments to test the performance of both ViT and BDIA-ViT for the quantization levels of $\{5, 0, -2\}$ over CIFAR10. In principle, the quantization level $l=-2$ provides a very coarse quantization effect and should lead to degraded performance. As shown in the table below, when the quantization level goes from 5 to -2, the performance of ViT and BDIA-ViT decreases as expected. One can also observe from the table that when $l\in \{5, 0\}$, BDIA-ViT performs consistently better than ViT. For the case of $l=-2$, ViT performs slightly better than BDIA-ViT. This could be because the very coarse quantization dominates the performance and negatively affects the BDIA training technique.

| quantization level | 5 | 0 | -2 |
| ------------------ | ----- | ----- | ----- |
| ViT | 87.90 | 87.76 | __84.31__ |
| BDIA-ViT | __89.50__ | __88.47__ | 84.16 |

__(2) Regarding a discussion on the trade-off between memory savings and computational overhead.__

In the revision, we will add a table indicating the average running time per epoch for training ViT, BDIA-ViT with online backpropagation, and BDIA-ViT without online backpropagation over CIFAR10. We will then explain that BDIA-ViT with online backpropagation saves training memory and improves validation performance at the cost of slightly longer training time than ViT.
We will also include additional figures in the revision showing the convergence curves in terms of wall-clock time.

__(3) Regarding the impact of quantization level on memory savings.__

Firstly, we note that the reversibility of BDIA-transformer is guaranteed when the side information is stored for any quantization level in the forward pass of the model. The memory for the side information is fixed for a particular transformer model and is independent of the quantization level. Secondly, in principle, when a coarse quantization is performed on the intermediate activation values across the transformer blocks, less memory is needed for the quantized activation values. Therefore, BDIA-transformer saves more memory when a coarser quantization is performed. However, the quantization cannot be too coarse, as the performance of BDIA-transformer will be degraded, as shown in the above table.

__(4) Can BDIA be applied to other transformer variants?__

Many thanks for the suggestions. We have evaluated BDIA for training vision-linformer over CIFAR10 by adopting the open-source code from https://github.com/pranavphoenix/VisionXformer/tree/main. Basically, in this experiment we replaced the attention block in ViT with the linear attention block from the above open-source code. The table below shows that BDIA again improves the validation accuracy. We believe that BDIA can be applied to other transformer variants. Due to the limited time for rebuttal, we did not have time to test other attention variants. It would be of great interest to further investigate the effectiveness of BDIA for those variants in the future.

| | CIFAR10 |
| --------------------- | ------ |
| vision-linformer | 84.16 |
| BDIA-vision-linformer | __86.6__ |

__(5) What are the potential limitations of BDIA when scaling to large models?__

These large models are rumored to require thousands of GPUs to train, and we will not have access to such resources, so our response is speculative.
Our hypothesis is that a large model usually has many transformer blocks. For example, Google states that GPT-4 has 120 transformer blocks. In principle, the regularization impact of the BDIA training technique increases along with the increasing number of transformer blocks, since the number of ODE solvers parameterized by $\\{\gamma_{k}\\}_{k=1}^{K-1}$ in BDIA increases exponentially with $K$. We hypothesize that the validation performance of the large model would be improved noticeably by using the BDIA training technique.

One thing that is not yet clear to us is whether BDIA is directly compatible, from an engineering viewpoint, with the distributed data parallel (DDP) framework, which is widely used for training large transformer models across many GPUs. Additional engineering work might be required to solve the above potential issue.

__(6) Regarding literature review on linear attention mechanisms and memory-efficient self-attention.__

We will update the introduction accordingly to reflect the works you mentioned.
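As a small illustration of the ensemble size mentioned above: with $\gamma_k \in \{\pm 0.5\}$ drawn independently for each of the $K-1$ blocks, each training sample effectively sees one of $2^{K-1}$ distinct solver configurations (the block count below is illustrative, not taken from the paper).

```python
from itertools import product

K = 12  # number of transformer blocks (illustrative value)
# Each of the K-1 gamma values is one of {-0.5, +0.5}, so the number of
# distinct ODE-solver configurations grows exponentially with K.
configs = list(product([-0.5, 0.5], repeat=K - 1))
print(len(configs), 2 ** (K - 1))  # 2048 2048
```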
Summary: This paper introduces BDIA-transformer, an exact bit-level reversible transformer that maintains the standard architecture for inference while reducing memory consumption during training. The approach adopts bidirectional integration approximation (BDIA), allowing the authors to consider each transformer block as an exactly reversible flow (once corrected by a 1-bit stored value at each activation layer) -- a framing that is theoretically and practically important in the study of ordinary differential equations. This reversibility provides a mechanism to recompute activations during the backwards pass when training the DNNs. More specifically, the key contributions of the paper are: (1) introducing a random hyperparameter $\gamma \in [-0.5,0.5]$ per transformer block per training sample, which regularizes the model by effectively training an ensemble of ODE solvers; and (2) implementing activation quantization to enable exact bit-level reversibility. During inference, $\gamma$ is set to its expectation (zero), which reduces the architecture to a standard transformer with only activation quantization applied.

Claims And Evidence: The authors claim their approach enables exact bit-level reversibility. This is supported by the mathematical formulation, though direct empirical verification of this property is somewhat limited in the experimental section. The paper claims the core inference architecture is preserved by their technique. This is adequately supported by equations demonstrating how $E(\gamma)=0$ results in a standard transformer update with only quantization added. The paper claims BDIA-transformer reduces memory consumption during training. This is supported by memory measurements shown in Table 1, which demonstrates reduced peak memory usage compared to standard transformers. The authors claim BDIA-transformer outperforms conventional transformers due to regularization effects.
The evidence in Figures 3-6 and Tables 1-3 supports this, showing improved validation performance across different tasks.

Methods And Evaluation Criteria: The methods and evaluation criteria (application domains, baseline architectures, and $\gamma$ ablation study) are generally sufficient. Given the computational claims of the paper, metrics demonstrating such overheads would make for a more comprehensive analysis.

Theoretical Claims: Yes. The claims are derivational in nature and are reasonable.

Experimental Designs Or Analyses: It would be helpful to see comparisons against other memory-saving techniques beyond RevViT.

Supplementary Material: Yes. There are slight additions to the paper in the short appendix.

Relation To Broader Scientific Literature: The authors provide a good overview of reversible neural networks and ODE-based neural networks. The bidirectional integration approximation is reviewed. Well-known quantization strategies are mentioned.

Essential References Not Discussed: None.

Other Strengths And Weaknesses: The novelty and demonstrated effectiveness of the approach make this a strong paper. One weakness is the lack of detailed computational overhead analysis.

Other Comments Or Suggestions: The paper would benefit from a more thorough analysis of the computational overhead introduced by BDIA-transformer. While memory savings are quantified, training time comparisons would provide a more complete picture. The paper mentions that RevViT has better memory efficiency than BDIA-transformer. A more nuanced discussion of this trade-off would strengthen the paper. The ablation studies could be expanded to include analysis of the impact of different quantization levels (beyond the 9-bit quantization considered).

Questions For Authors: Could you elaborate on how BDIA-transformers might be integrated with other memory-saving techniques like gradient checkpointing, selective activation recomputation, and activation offloading?
Are there specific combinations you've explored or would recommend?

How sensitive is the method to different quantization levels, and what are the performance trade-offs?

What specific challenges did you encounter when implementing BDIA for different transformer architectures, and how might these insights guide others implementing your approach?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 states __minimally changes the architecture allows the use of this method for finetuning, which is a creative and powerful use case__. We would greatly appreciate the reviewers checking our detailed rebuttal responses and reconsidering their scores for our paper.

__(1) Regarding the computational overhead introduced by BDIA-transformer.__

In the revision, we will add a table indicating the average running time per epoch for training ViT, BDIA-ViT with online backpropagation, and BDIA-ViT without online backpropagation over CIFAR10. We will then explain that BDIA-ViT with online backpropagation saves training memory and improves validation performance at the cost of slightly longer training time than ViT. We will also include additional figures in the revision showing the convergence curves in terms of wall-clock time.

__(2) Regarding a discussion of RevViT having better memory efficiency than BDIA-transformer.__

We will elaborate in the paragraph at lines 369-376 in the experimental section regarding the benefit of RevViT with respect to memory efficiency.

__(3) Regarding the impact of different quantization levels.__

As also suggested by reviewers 8yrY and tmy4, we fully agree that conducting the ablation study on the impact of different quantization levels would be really helpful for understanding the behavior of the BDIA training technique. We have performed additional experiments to test the performance of both ViT and BDIA-ViT for the quantization levels of $\{5, 0, -2\}$ over CIFAR10. In principle, a quantization level of $l=-2$ provides a very coarse quantization effect and should lead to degraded performance.
As shown in the table below, when the quantization level goes from 5 to -2, the performance of ViT and BDIA-ViT decreases as expected. For $l=-2$, there is a large performance drop due to the very coarse quantization effect. One can also observe from the table that when $l\in \\{5, 0\\}$, BDIA-ViT performs consistently and significantly better than ViT. For the extreme case of $l=-2$, ViT performs slightly better than BDIA-ViT. This is likely because the very coarse quantization dominates the performance and negatively affects the BDIA training technique.

| quantization level | 5 | 0 | -2 |
| ------------------ | ----- | ----- | ----- |
| ViT | 87.90 | 87.76 | __84.31__ |
| BDIA-ViT | __89.50__ | __88.47__ | 84.16 |

__(4) Regarding integrating BDIA-transformers with other memory-saving techniques.__

There is one scenario where the BDIA training technique can be nicely combined with gradient checkpointing. We note that some transformer architectures may have special bottleneck blocks working as dimensionality transitions (similar to the bottleneck blocks in ResNet). In that case, a certain number of transformer blocks of the same dimensionality are present between every two bottlenecks for efficient representation learning. In this scenario, one cannot directly perform online backpropagation from the very top transformer block to the bottom one due to dimensionality inconsistency. Instead, one can apply gradient checkpointing to handle the bottleneck blocks, and apply BDIA to do online backpropagation for those transformer blocks of the same dimensionality between every two bottlenecks.

__(5) What specific challenges did you encounter when implementing BDIA for different transformer architectures?__

Many thanks for the comment. We will release our source code once the paper is accepted so that other researchers can easily implement the BDIA training technique based on our source code.
From our experience, BDIA without online backpropagation can be easily implemented for different transformer architectures by following the update expressions in the paper. The challenging part is realizing online back-propagation for BDIA, which we implemented by modifying the open-source code for RevViT. Researchers who only want to improve the validation performance of a transformer can simply ignore the online back-propagation part of BDIA.

__(6) Regarding comparisons against other memory-saving techniques beyond RevViT.__

Many thanks for the comment. We plan to investigate the performance of the middle-point reversible transformer in the revision, whose update expression is given by

$x_{k+1}= x_{k-1}+ 2 \\left[\\textrm{FFN}_k( {x}_k+\\textrm{Atten}_k({x}_k)) + \\textrm{Atten}_k({x}_k) \\right], \\quad k=1,\ldots, K-1.$

It is expected that BDIA-transformer will still perform better than the middle-point reversible transformer, because the latter does not introduce any regularization into the neural network.
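As a quick numerical illustration of why the middle-point update above is reversible: $x_{k-1}$ can be recovered from $(x_k, x_{k+1})$ by simply rearranging the update. The sketch below uses toy stand-ins for the attention and FFN sub-blocks (any deterministic functions work here; this is not the paper's implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for Atten_k and FFN_k; reversibility of the middle-point
# scheme does not depend on their particular form.
W_a = 0.1 * rng.standard_normal((4, 4))
W_f = 0.1 * rng.standard_normal((4, 4))
atten = lambda x: np.tanh(x @ W_a)
ffn = lambda x: np.tanh(x @ W_f)

def forward(x_prev, x_cur):
    # x_{k+1} = x_{k-1} + 2*[FFN_k(x_k + Atten_k(x_k)) + Atten_k(x_k)]
    a = atten(x_cur)
    return x_prev + 2.0 * (ffn(x_cur + a) + a)

def inverse(x_cur, x_next):
    # Rearranged: x_{k-1} = x_{k+1} - 2*[FFN_k(x_k + Atten_k(x_k)) + Atten_k(x_k)]
    a = atten(x_cur)
    return x_next - 2.0 * (ffn(x_cur + a) + a)

x0 = rng.standard_normal((2, 4))
x1 = rng.standard_normal((2, 4))
x2 = forward(x0, x1)
print(np.allclose(inverse(x1, x2), x0))  # True
```

Note that this inversion is exact only up to floating-point round-off; achieving *bit-level* reversibility is what the paper's activation quantization plus side information is for.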
Summary: The paper proposes a novel type of reversible transformer with the aim of reducing memory during training. To this end, this work treats each transformer block as the Euler integration approximation in a manner similar to Neural ODEs. There are two main contributions. Firstly, the authors borrow a technique from recent works on diffusion inversion for round-trip image editing, which involves bidirectional integration approximation. This approximation introduces a hyperparameter $\gamma$. The authors propose selecting $\gamma$ randomly as either -0.5 or 0.5 for each training sample and training block. Consequently, the training can be viewed as an ensemble of ODE solvers. This regularization led to observed improvements on validation data. Secondly, to ensure reversibility, the authors propose performing activation quantization while storing side information. This approach is validated on small datasets involving image classification, machine translation, and language modeling.

---- Update after the rebuttal: ----

I have carefully read the authors' rebuttal as well as the comments from the other reviewers. After consideration, I have decided to increase the score. I hope the authors integrate the suggestions from the reviewers into the next version of the paper and also release the code.

Claims And Evidence: There is a problem. Although the paper provides detailed mathematical derivations, they appear to be more closely aligned with concepts from residual networks (ResNets) rather than being specifically tailored to transformers.

Methods And Evaluation Criteria: Yes

Theoretical Claims: Yes

Experimental Designs Or Analyses: Yes. I checked all the experiments.

Supplementary Material: Yes. I checked all parts.

Relation To Broader Scientific Literature: It is a relevant contribution to the literature, as the investigated problem is timely: reducing the memory consumption during the training of transformers.
Essential References Not Discussed: No

Other Strengths And Weaknesses:

Strengths:
- The paper addresses an important and timely problem: reducing the memory consumption during the training of transformers, which is particularly relevant given the current widespread use of transformer models.
- The proposed idea is compelling as it retains the original architecture of transformers. This stands in contrast to existing approaches that typically involve modifications to the transformer architecture.

Weaknesses:
- The paper is difficult to follow. For example, the abstract is too long.
- The reproducibility is low, as there is no source code, pseudocode, or detailed algorithms.
- Although the paper includes thorough mathematical derivations, these seem to be more aligned with concepts from residual networks (ResNets) rather than focusing specifically on transformers. Notably, in equation (4), the authors treat the combined attention and feed-forward network modules as a residual term, resulting in derivations similar to those found in Neural ODEs with ResNets. However, these modules are key differentiators of transformer architectures compared to other models.
- The experiments mainly consider small datasets or rely on toy examples for transformers.

Other Comments Or Suggestions: No

Questions For Authors:
- In figure 1, and in line 287, how did the authors integrate into standard transformers?
- In figure 2, the authors should show the reconstruction errors w.r.t. the proposed method using quantization and side information. Otherwise, the effectiveness of these tricks is not clear.
- The authors should compare the proposed method experimentally against vanilla transformers with dropout.
- Although the authors show the memory gains, they should show convergence in terms of wall-clock time to better assess the computational complexity introduced by the proposed method.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: The authors thank all four reviewers for their appreciation of the novelty, simplicity, and effectiveness of our BDIA training technique for transformers. Notably, reviewer 7zwY states that __the novelty and effectiveness of the approach make this a strong paper__. Reviewer tmy4 states __minimally changes the architecture allows the use of this method for finetuning, which is a creative and powerful use case__. We would greatly appreciate the reviewers checking our detailed rebuttal responses and reconsidering their scores for our paper.

__(1) Regarding the paper readability.__

Based on comments from all four reviewers, we will make the following changes in the revision to improve readability:

(a) We will shorten the abstract to make it more concise.
(b) As suggested by reviewer tmy4, we will modify Subsection 4.3 by including a proposition as well as a proof of that proposition. The proposition will relate to the reversibility of BDIA with quantization and side information.
(c) The pseudocode for the BDIA training method will be included in the paper to enable readers to easily understand the method.
(d) A few additional ablation studies will be included in the revision. In particular, we will study the impact of dropout rates, number of transformer blocks, and quantization levels on the validation performance of BDIA-ViT. We note that all the above ablation studies demonstrate that BDIA-ViT performs consistently better than ViT in terms of validation accuracy. We will also present the computational overhead of BDIA-ViT with online back-propagation in the revision.

__(2) Regarding the reproducibility and pseudocode of BDIA.__

In the revision, we will include the pseudocode for the BDIA training method. We will also release the source code after the paper is accepted so that other researchers will be able to adapt our source code to different transformer-based tasks.
__(3) Although the paper includes thorough mathematical derivations, these seem to be more aligned with concepts from residual networks (ResNets) rather than focusing specifically on transformers.__

It is true that the derivations take a unified viewpoint, i.e., that architectures with skip connections can be regarded from a numerical integration point of view. Components of the particular schemes (ResNet, NeuralODE, Transformer, etc.) are simply regarded as __black-box__ functions in this view. This unified viewpoint makes BDIA applicable not only to transformers but also to other schemes. The reason we focus on transformers in the paper is that transformers are commonly employed in models that (i) are very large and (ii) are the state-of-the-art in terms of performance. Moreover, related methods have not been studied in the context of transformers. To reflect the well-motivated comment by the reviewer, we will further strengthen the interpretation of our approach for transformers specifically in the final version of the paper.

__(4) Regarding the small datasets in the experiments.__

Many thanks for the comment. Firstly, as commented by reviewer tmy4, the BDIA training technique could be a good candidate for fine-tuning transformer-based models because BDIA retains the standard architecture in the inference procedure. Fine-tuning a pre-trained model often works with a dataset of small or intermediate size. Our experiment on fine-tuning GPT2-medium using a small dataset in the paper shows that BDIA produces promising results. Secondly, even though the considered datasets in our experiments are not very large, all three training tasks (i.e., image classification, NLG, and language translation) in our paper show that the BDIA training technique improves the validation performance of transformers consistently and considerably, demonstrating the potential applicability of the BDIA training technique in different applications.
__(5) Regarding the impact of dropout.__

Firstly, to clarify, the experimental results in Tables 1 and 2 of the paper were obtained by setting the dropout rate to 0.1. We will revise the paper accordingly to clarify the ambiguity. Secondly, we conducted additional experiments setting the dropout rate to 0.0 and 0.2. The table below demonstrates that dropout and BDIA can work together to improve the validation performance.

| dropout | ViT (CIFAR10) | BDIA-ViT (CIFAR10) | ViT (CIFAR100) | BDIA-ViT (CIFAR100) |
| ------- | ----- | ----- | ----- | ----- |
| 0.0 | 86.27 | __89.20__ | 59.13 | __64.45__ |
| 0.2 | 87.24 | __88.22__ | 61.68 | __64.24__ |

__(6) Regarding figure 2.__

We will update figure 2 to demonstrate that the reconstruction error for BDIA with quantization and side information is zero. That is, __lossless__ online back-propagation is guaranteed with BDIA thanks to the quantization and side information.

__(7) Regarding figure 1.__

We will revise the caption of figure 1 to better explain how it is generated.
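To make the "zero reconstruction error" claim above concrete, here is a simplified, self-contained illustration (not the paper's exact construction) of how one bit of side information per entry can undo the rounding loss: if $x_k$ and $h_k(x_k)$ live on a fixed-point grid of spacing $\delta$, then with $\gamma=0.5$ the combination $(1-\gamma)x_k+(1+\gamma)h_k(x_k)$ lands on the half-grid, so quantizing it back to the grid loses at most $\delta/2$, which a single stored bit recovers exactly.

```python
import numpy as np

l = 2
delta = 2.0 ** (-l)  # grid spacing of the fixed-point grid (assumed quantizer)

def Q(u):
    # Round to the grid; exact halves are rounded up (assumed convention).
    return np.floor(u / delta + 0.5) * delta

rng = np.random.default_rng(0)
x = Q(rng.standard_normal(6))  # x_k, already on the grid (block boundary)
h = Q(rng.standard_normal(6))  # h_k(x_k), also assumed on the grid

gamma = 0.5
u = (1 - gamma) * x + (1 + gamma) * h  # lands on the half-grid
y = Q(u)                               # quantized boundary activation
s = (y != u)                           # 1-bit side information per entry

u_rec = y - s * (delta / 2)            # undo the half-grid rounding exactly
x_rec = (u_rec - (1 + gamma) * h) / (1 - gamma)
print(np.array_equal(x_rec, x))  # True: reconstruction is exact, not approximate
```

All quantities here are dyadic rationals, so the arithmetic is exact in floating point and the reconstruction is bit-level, mirroring the lossless back-propagation property discussed above.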
Clipped SGD Algorithms for Performative Prediction: Tight Bounds for Stochastic Bias and Remedies
Accept (poster)
Summary: The paper studies the problem of Performative Prediction: that is when the data distribution also depends on the model weights. This problem also includes the case of differentially private algorithms. The paper studies clipped SGD for finding a stable solution. Under various assumptions, the paper provides upper bounds and lower bounds for the distance to the stable solution in the strongly convex case. The algorithm extends to the nonconvex case and differential privacy. The paper further studies the DiceSGD algorithm with two clipping operators. Under the assumption that the error feedback is bounded in expectation, the authors show that the bias from the clipped algorithm goes to 0. ## Update after rebuttal: The authors have addressed my question. I maintain my positive assessment of the paper. Claims And Evidence: The theoretical claims appear to be correct. The experimental results support these claims. Methods And Evaluation Criteria: The evaluation is based on simple experiments and small datasets. For the purpose of validating the claims, they are fine. To show actual applications of the algorithm, the paper needs to show results on real datasets with clear justifications/considerations for how the decision-dependent distributions are modeled. Theoretical Claims: - I checked Theorems 3 and 4; both are correct, except for some typos. In the main paper, Theorem 3 made reference to $\sigma_{DP}$ before it was introduced. In the proof in the appendix, $\zeta_t$ was also used (without proper introduction), and disappears in Lines 647-652. - Validation of Assumption 8: The bound for $|e_t|$ is quite large in Figure 3; the question is how meaningful the bound $M$ is if it is so large. Experimental Designs Or Analyses: For the purpose of validating the theoretical claims, I think the experimental setup in the paper is sufficient, although the reason for the choice of the data distribution is unclear. Supplementary Material: Yes, part A, B, C, H. 
Relation To Broader Scientific Literature: The paper studies a problem of recent interest and the paper offers concrete theoretical guarantees for clipped SGD in this setting. Essential References Not Discussed: None that I'm aware of. Other Strengths And Weaknesses: - Personally I find the presentation hard to follow and I would prefer that the results and theorem statements are presented in a more concise way. Other Comments Or Suggestions: - In the proof of Lemma 3, the authors several times referred to Eq. 7 which didn't match the intended purpose. Questions For Authors: - Do you have any guarantees for the non-strongly convex case? - Is the dependence on the dimension necessary? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > To demonstrate practical applications of the algorithm, the paper needs to include real datasets with clear justifications/considerations for how the decision-dependent distributions are modeled. Please refer to the response to Reviewer svSm. > The appearance of $\zeta_{t}$ and $\sigma_{DP}^2$ in Theorem 3. Thank you very much for the careful review and we apologize for these careless typos. Indeed, Theorem 3 should be presented without $\sigma_{DP}^2$, while we remark that the theorem in L579-597 presents a general version of the result. As for the noise $\zeta_{t+1}$, it should appear in the inner product terms in L647-652. That said, it does not affect the next step in (29) as the noise is zero-mean. > Validation of Assumption 8: The bound for $|e_{t}|$ is quite large in Figure 3. The concern is how meaningful the bound is if it is so large. The bound is meaningful as we observe in Theorem 7 that the relevant term to it is in the order of $O( M^2 \gamma_t^2 )$ which can be dominated by the $O( \gamma_t )$ term as $\gamma_t$ is diminishing such as when $\gamma_t = O(1/t)$. Particularly, as seen in Fig. 3, the empirical upper bound for $M^2$ is around $10^3$, and its effect on $\| \tilde{\theta}_t \|^2$ becomes negligible ($\approx 10^{-6}$) with $\gamma_t = 50/(5000+t)$, $t \geq 10^6$. > Experiments: The reason for the choice of the data distribution is unclear. These choices follow from the literature in performative prediction such as [Perdomo et al, 2020]. 
For both cases in our experiments, the distribution shifts are designed to model strategic behavior where the data owners (users) maximize a quadratic utility function adapting to the new $\theta$, e.g., for logistic regression for default detection, we have $(x^\star,y) \sim {\cal D}(\theta)$ given by: $x^\star = \arg\max_{x} U(x;\bar{x},\theta) = \frac{- y \langle \theta, x \rangle}{\max\\{ \epsilon_{\sf U}, \| \theta \| \\}} - \frac{1}{2\beta} \| x-\bar{x} \|^2$ where $(\bar{x},y)$ is drawn from the base distribution. In doing so, users with a default history shift their profile towards having a lower chance of being detected as fraud ($y=1$). > Typos Indeed, in the proof of Lemma 3, it should point to (2) instead of (7). We will revise it. > Do you have any guarantees for the non-strongly convex case? To our best knowledge, there is no known convergence guarantee for SGD with distribution shift with a non-strongly convex loss function, even without clipping. The main challenge is that in the latter case, SGD (or even GD) may not converge to a unique solution even when the distribution shift is fixed. Having said that, the non-strongly convex case can be partially covered by our Theorems 5 & 8 for the non-convex case, provided that $\{ \sup_{z \in Z} |\ell(\theta_t ; z)| \}_{t \geq 0}$ is bounded. That said, we admit that the obtained bound is weaker than in the strongly convex case. > Is the dependence on the dimension necessary? The dependence on dimensionality arises from the setting that $\zeta_{t+1} \sim {\cal N}(0, \sigma_{DP}^2 {\bf I})$ in (14) which is the same as in (Abadi et al., 2016) for satisfaction of the DP guarantee. This leads to $E[ \| \zeta_{t+1} \|^2 ] = d \sigma_{DP}^2$. The same $d$-dependence would appear in Sec 3.3 of (Koloskova et al., 2023) if a similar design for the DP noise is adopted. Note that their paper took $\zeta_{t+1} \sim {\cal N}(0, (\sigma_{DP}^2/d) {\bf I})$.
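The best response described in this rebuttal has a closed form: setting $\nabla_x U = 0$ gives $x^\star = \bar{x} - \beta y \theta / \max\{\epsilon_{\sf U}, \|\theta\|\}$, so the induced shift has magnitude $\beta |y|$, consistent with $\beta$-sensitivity. A minimal numerical check (variable names and values are illustrative, not from the paper):

```python
import numpy as np

def utility(x, x_bar, y, theta, beta, eps_u=1e-3):
    """Quadratic utility U(x; x_bar, theta) from the rebuttal."""
    return (-y * (theta @ x) / max(eps_u, np.linalg.norm(theta))
            - np.sum((x - x_bar) ** 2) / (2 * beta))

def best_response(x_bar, y, theta, beta, eps_u=1e-3):
    """Closed-form maximizer obtained by setting grad_x U = 0."""
    return x_bar - beta * y * theta / max(eps_u, np.linalg.norm(theta))

rng = np.random.default_rng(0)
theta, x_bar = rng.normal(size=5), rng.normal(size=5)
y, beta = 1.0, 0.5

x_star = best_response(x_bar, y, theta, beta)

# U is strictly concave in x, so x_star beats any nearby perturbation ...
for _ in range(200):
    perturbed = x_star + 0.01 * rng.normal(size=5)
    assert utility(x_star, x_bar, y, theta, beta) >= utility(perturbed, x_bar, y, theta, beta)

# ... and the shift magnitude equals beta * |y| (the beta-sensitivity in action).
assert np.isclose(np.linalg.norm(x_star - x_bar), beta)
```

The check confirms that, for unit labels, users move their features by exactly $\beta$ in the direction that lowers the detection score.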
Summary: The paper claims error bounds for the estimate obtained using the projected clipped SGD (PCSGD) and DiceSGD algorithms in the problem of performative prediction, where artificial noise can be added to preserve data privacy. While PCSGD is known for its stability, the output of the algorithm exhibits bias from the performative stable solution in this setting, for which lower and upper bounds are proved in the paper. Expected bounds are also proven for DiceSGD, whose result is asymptotically unbiased. ## Update after rebuttal My questions about the paper have been clarified, and I intend to maintain my original recommendation score. Claims And Evidence: The expected error bounds of PCSGD and DiceSGD claimed in the proposed theorems are supported with proofs and numerical experiments, along with the effect of differential data privacy. Methods And Evaluation Criteria: The evaluated bias reported in the experiment results goes along with the claims made in the theory section about the estimation bias and differential privacy. Theoretical Claims: Theoretical claims seem sound. Experimental Designs Or Analyses: The designs for the numerical experiments seem valid. However, there seems to be no setting in which the loss function is nonconvex, so the reported experiments do not validate the claim made regarding nonconvex loss (Theorem 5). Supplementary Material: I have checked the supplementary materials, but the proofs for the theorems might not have been carefully checked. Relation To Broader Scientific Literature: Data privacy is becoming an important issue in the digital age. For a more performant model, the size of the training data and the model itself are increasing, which also raises the risk of exposing individual information within the training data. It will become increasingly necessary to explore training methods that protect data privacy at a minimal cost to model performance. 
Essential References Not Discussed: No special reference I would like to discuss. Other Strengths And Weaknesses: In both PCSGD and DiceSGD, randomized Gaussian noise $\zeta \sim N(0, \sigma_{DP}^2I)$ is employed for differential privacy guarantees, but the proposed $\sigma_{DP}^2$ for guaranteeing $(\epsilon,\delta)$-differential privacy also depends on the number of iterations $T$. It may be difficult to choose an appropriate magnitude of the noise if the appropriate number of SGD iterations is not known in advance. Other Comments Or Suggestions: No additional comments. Questions For Authors: 1. Regarding the result of the quadratic minimization (Figure 1), what is the randomized mechanism for differential privacy used here? How exactly does the value of the privacy budget $\epsilon$ affect the training? 2. Is there a particular reason that clipped SGD is selected for performative prediction? If it is for stability, for example, other methods such as implicit SGD [1] can be considered. [1] Lee, Y., Lee, S., & Won, J. H. (2022, June). Statistical inference with implicit SGD: proximal Robbins-Monro vs. Polyak-Ruppert. In International Conference on Machine Learning (pp. 12423-12454). PMLR. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Experiments: There is no setting where the loss function is nonconvex, so the reported experiments do not validate the claim regarding nonconvex loss (Theorem 5). We conducted an additional binary classification experiment in **[https://ibb.co/Jjx0XMRg ]**, where we simulated PCSGD, DiceSGD, Clip21 [Khirirat et al., 2024] under different $\beta$-sensitivity. We adopted the sigmoid loss function: $$ \ell(\theta; z) = (1 + \exp(c \cdot y x^\top \theta))^{-1} + \frac{\eta}{2} \| \theta \|^2 $$ which is smooth but **non-convex**. The data distribution ${\cal D}^o$ is generated by $Unif[-1, 1]$, and the label $y_i \in \\{ \pm 1 \\}$. The shift dynamics are given by $\\{(x_i - \beta \theta; y_i)\\}_{i=1}^{1e3}$. We observed that the SPS stationarity measures in (3) of the algorithms initially decrease and eventually saturate at a certain level, which is consistent with the predictions made by Theorems 5 and 8. > In both PCSGD and DiceSGD, the noise $\zeta \sim \mathcal{N}(0, \sigma^2_{DP}I)$ is employed for DP, but $\sigma_{DP}^2$ depends on $T$. It may be difficult to choose an appropriate $\sigma_{DP}$ if the number of required SGD iterations is not known in advance. This is a good observation. However, we believe that obtaining a DP bound independent of $T$ would be difficult under the current setting. The reason is that a potential attacker has access to the training history of $T$ iterations; intuitively, the larger $T$ is, the more privacy leakage there will be. We observe similar conclusions in [Abadi et al., 2016], [Zhang et al., 2024]. > In Figure 1, what randomized mechanism for differential privacy is used? How exactly does the value of the privacy budget affect training? We are using the Gaussian mechanism, see (14), for DP. In our numerical experiments, the privacy budget affects the DP noise variance as studied in Corollary 1. > Is there a particular reason why Clipped-SGD is selected for performative prediction? 
If it is for stability, for example, other methods such as implicit SGD [1] could also be considered. The clipped SGD algorithm is selected in our context due to its popularity and wide range of applications, e.g., DP enabled training, robust optimization, etc. This motivated us to analyze its convergence behavior in the context of performative prediction, where the distribution shifts may occur naturally in several key applications of clipped SGD. Similar to clipped SGD which introduced non-smooth behavior to the stochastic process, we believe that analyzing algorithms such as Implicit SGD in the performative prediction setting is an interesting direction. Thank you for the suggestion. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough and detailed response. My questions about the paper have been clarified, and I intend to maintain my original recommendation score.
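The non-convex experiment described in this rebuttal can be sketched end-to-end: clip the per-sample gradient of the sigmoid loss, add Gaussian DP noise, and project. A minimal illustrative run (the step-size schedule, projection radius, and noise scale below are placeholders I chose, not the paper's calibrated values):

```python
import numpy as np

def clip(g, c):
    """Gradient clipping: clip_c(g) = g * min(1, c / ||g||)."""
    n = np.linalg.norm(g)
    return g if n <= c else g * (c / n)

def project_ball(theta, radius):
    """Euclidean projection onto the ball of the given radius."""
    n = np.linalg.norm(theta)
    return theta if n <= radius else theta * (radius / n)

def grad_sigmoid_loss(theta, x, y, cl=1.0, eta=0.01):
    """Gradient of l(theta; (x, y)) = (1 + exp(cl * y * x^T theta))^{-1}
    + (eta/2)||theta||^2, which is smooth but non-convex."""
    f = 1.0 / (1.0 + np.exp(cl * y * (x @ theta)))
    return -f * (1.0 - f) * cl * y * x + eta * theta

rng = np.random.default_rng(1)
d, T, c, beta, sigma_dp = 5, 2000, 1.0, 0.1, 0.05
theta = np.zeros(d)
for t in range(1, T + 1):
    gamma = 50.0 / (500.0 + t)            # diminishing step size
    x_bar = rng.uniform(-1.0, 1.0, size=d)
    y = rng.choice([-1.0, 1.0])
    x = x_bar - beta * theta              # decision-dependent distribution shift
    g = clip(grad_sigmoid_loss(theta, x, y), c) + sigma_dp * rng.normal(size=d)
    theta = project_ball(theta - gamma * g, radius=10.0)
```

The iterates stay bounded by the projection while the clipped, noised gradients drive them toward an approximately stationary point, mirroring the saturation behavior the rebuttal reports.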
Summary: This paper examines the convergence behavior of clipped stochastic gradient descent algorithms in the performative prediction setting, where the subsampling distribution depends on the previous iterate. The theoretical analysis addresses both strongly convex and non-convex objective functions. Claims And Evidence: The claim is clearly stated and well-supported by both theoretical proofs and empirical results. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: The experimental design seems reasonable to me. Supplementary Material: I have reviewed most of the materials provided. Relation To Broader Scientific Literature: (DP) Optimization Essential References Not Discussed: Please refer to my response to Question 2. Other Strengths And Weaknesses: Strengths: * The paper provides a rigorous analysis of clipping bias and convergence in the performative prediction setting, addressing both convex and non-convex objectives. Weaknesses: * The analytical techniques employed in this paper are largely grounded in existing literature. For example, the privacy accounting builds on the moment accountant framework proposed by Abadi et al. (2016), while the convergence analysis draws from methodologies developed by Li et al. (2022), Perdomo et al. (2020), and Zhang et al. (2024). * The treatment of distribution shift is based on a worst-case bound. It would be valuable to further investigate the interaction between distribution shift and differential privacy, as this could uncover additional insights into the practical implications of the proposed algorithms. Other Comments Or Suggestions: * Lines 301–302: The claim that the algorithm achieves $(\varepsilon, \delta)$-Rényi Differential Privacy is not standard. Typically, Rényi DP (RDP) is parameterized solely by $(\alpha, \varepsilon)$, where $\alpha > 1$ is the Rényi order. 
The inclusion of a $\delta$ parameter suggests a composition or conversion to approximate DP (i.e., $(\varepsilon, \delta)$-DP), which should be clarified. Please revise the statement to reflect the standard formulation of RDP or explain the intended interpretation. * Algorithm 1, Line 1: Is $e_t$ a scalar or a vector? The notation is ambiguous and could benefit from clarification. If $e_t$ represents a step size, noise scale, or another parameter, please specify its role and dimensionality explicitly to avoid confusion. Questions For Authors: * The paper considers two cases: strongly convex and non-convex objectives. Would similar theoretical guarantees or insights extend to the convex (but not strongly convex) case? Clarifying whether the techniques or results generalize to this intermediate setting would strengthen the contribution. * I was unable to locate a dedicated related work section. Are there key prior works the authors could discuss to better contextualize their contributions? A comparison to existing results in differentially private optimization—particularly those involving clipping techniques—would be valuable * Could the author further clarify the novelty of this work? The contribution remains somewhat unclear, particularly regarding the core message the paper intends to convey. If the primary goal is to analyze clipped (stochastic) gradient descent, then what is the specific technical contribution? It appears that certain assumptions—especially Assumptions A.5 and A.8—simplify the analysis and allow the application of existing techniques for studying clipped gradient descent. * Continuing from the previous point, it remains unclear whether clipped-SGD-based performative prediction is significantly different from standard analyses of clipped SGD under Assumption A.5. 
The authors are encouraged to clarify this distinction and elaborate on whether the performative aspect introduces fundamentally new challenges or insights beyond existing clipped SGD frameworks. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Difference in analysis vs existing works. Indeed, some of the techniques we applied are standard and grounded in existing works, yet we emphasize that this is the first rigorous study of clipping bias with performative prediction. The lower bounding result in Theorem 4 is new as it establishes the tightness of the upper bounds in Theorem 3. It shows that the bias analysis in Theorem 3 is unlikely to be improved with a different analysis. We elaborate more on this point in our response to your last comment. > Interaction between distribution shift and differential privacy beyond worst case We agree, especially as the co-evolution of the DP clipped SGD algorithm and distribution shift is an exciting avenue for future investigation. That said, our results derived from worst-case sensitivity offer a faithful approximation of the possibly non-linear interaction. Evidence includes: (i) the form of distribution shift with linear dependence on $\theta$, which achieves the worst-case sensitivity bound, is motivated by best response dynamics with utility maximization [Perdomo et al., 2020], also see response to rev. KRfQ; (ii) Theorems 3, 4 show that our analysis under the sensitivity assumption is tight. > Statement about Rényi-DP and $e_t$. We apologize for the careless typos. - The discussion about Renyi-DP in L302 (right col) refers to $(\epsilon,\delta)$-DP as in Def. 6. In Appendix A.2 [Zhang et al., 2024], DiceSGD is first shown to satisfy Renyi-DP, and the standard DP property is established subsequently. - The term $e_t$ is in $R^d$. We will revise notations throughout the paper. > Q: Guarantees for the convex case. We refer to the response to Rev. KRfQ. In short, Theorems 5 & 8 can partially cover the non-strongly-convex case, yet the analysis is still an open problem for performative prediction using SGD in general, even without clipping. > Q: Comparison with existing results involving clipping techniques in DP optimization. 
Our work is the first to address the convergence of clipped SGD in performative prediction. Below is a brief comparison to related works: Vs. [Mendler-Dunner et al., 2020] & others on performative prediction -- We extend the analysis of SGD with distribution shifts to clipped SGD, which entails a non-continuous mean field. Such analysis is the first of its kind to our knowledge. Vs. [Koloskova et al., 2023] & others on clipped SGD -- We study the effects of distribution shift on clipped SGD dynamics with a non-gradient mean field prior to clipping. One new finding is that the clipping bias crucially depends on the strength of the distribution shift. Subject to space limitations, we will include a detailed comparison in the revised version. > Q: Clarify (i) novelty and main message, (ii) distinction between Clipped-SGD-based performative prediction and standard analyses of Clipped-SGD, and elaborate on the effect of performativity. Our main contribution and novelty lie in conducting the first rigorous study of *clipped SGD algorithms in the presence of distribution shift (i.e., performative prediction)*. We show that there is an inevitable bias that can be exacerbated by distribution shifts, and show that DiceSGD can remedy the bias by extending the latter analysis. Besides the above takeaways for practitioners, here are some novel technical challenges overcome by this work: - For the stochastic processes generated by PCSGD or DiceSGD, the clipping operator turns the dynamics into a non-continuous one, and distribution shift introduces non-gradient mean-field behavior; consequently, it is not possible to adopt existing analyses to work out the convergence. - For PCSGD, a notable challenge is in Lemma 4 that controls the error of the clipped stochastic gradient $b_t = clip_{c}( \nabla \ell(\theta;Z) ) - \nabla f( \theta; \theta )$. Existing work such as [Zhang et al., 2020b] only took a crude bound insufficient for our analysis. 
Our final result yields a bias of $(\max\\{G-c,0\\})^2$ that becomes bias-free with large $c$. In comparison, the bias in [Koloskova et al., 2023] is $O( \min\\{ \sigma, \sigma^2/c \\})$ which does not vanish even if $c>G$. - For DiceSGD, the existing result in [Zhang et al., 2024] only works for constant step sizes and the non-convex setting without distribution shifts. We made a major revamp including (38) to handle time-varying step sizes, (42) to handle the non-gradient pre-clipped mean field, etc. - We also respectfully disagree that using A5 & A8 "simplify the analysis and allow the application of existing techniques". A5 is a general assumption used in almost all papers on performative prediction; it does not directly simplify the analysis as the effect of distribution shift on the PCSGD recursion is non-explicit. A8 is used in controlling the error in DiceSGD in (42); again, its role is not immediately clear from a direct inspection. The non-convex loss case also requires a non-obvious design for the Lyapunov function as $f( \tilde{\theta}, \theta )$, see Appendix G.
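The two-clipping error-feedback mechanism discussed in this thread can be sketched as follows. This is an illustrative paraphrase of the mechanism, not the exact DiceSGD recursion from [Zhang et al., 2024], which may differ in details; the toy problem and constants are mine:

```python
import numpy as np

def clip(v, c):
    """clip_c(v) = v * min(1, c / ||v||)."""
    n = np.linalg.norm(v)
    return v if n <= c else v * (c / n)

def dice_style_step(theta, e, g, gamma, c1, c2, noise):
    """One update with two clipping operators and error feedback:
    the residual clipped away from the gradient is stored in e and
    re-injected later, which removes the asymptotic clipping bias."""
    v = clip(g + e, c1)        # first clipping: feedback-corrected gradient
    e = clip(e + g - v, c2)    # second clipping: keeps the error state bounded
    return theta - gamma * (v + noise), e

# Toy run on a strongly convex quadratic f(theta) = 0.5 * ||theta - target||^2,
# starting far enough away that clipping with c1 = 0.5 is active for many steps.
d = 3
target = np.full(d, 3.0)
theta, e = np.zeros(d), np.zeros(d)
for t in range(3000):
    g = theta - target                    # exact gradient (no sampling noise)
    theta, e = dice_style_step(theta, e, g, gamma=0.1, c1=0.5, c2=1.0,
                               noise=np.zeros(d))

assert np.linalg.norm(theta - target) < 0.05  # bias vanishes despite clipping
```

A plain clipped update with the same $c_1$ would retain a bias of order $(\max\{G-c,0\})^2$ near the optimum of a shifted problem; the error-feedback state $e$ is what recovers the clipped-away mass over time.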
Summary: In this work the authors study the convergence of clipped stochastic gradient descent (SGD) algorithms with decision-dependent data distribution. They explain the performative prediction problem, which is a more general and challenging problem than standard optimization, and they define the performative stable (PS) and stationary PS (SPS) solutions of it. In order to find a PS solution they introduce the Projected Clipped SGD (PCSGD) and provide convergence guarantees of this algorithm for strongly convex and general non-convex objectives. They also extend their analysis to the Differential Privacy (DP) setting. Additionally, they adapt the previously introduced DiceSGD algorithm for the performative prediction problem and they also offer convergence guarantees of this algorithm for strongly convex and general non-convex objectives. Finally, the authors provide numerical experiments applying their algorithms PCSGD and DiceSGD to quadratic minimization and logistic regression problems. Claims And Evidence: Most of the claims are clear with supporting proofs, however I think the presentation of the theorems needs some improvement. For example, in Theorem 3, what is $\sigma\_{DP}^2$? I understand what $\sigma\_{DP}^2$ is in the context of Section 3.2, however it has not been introduced earlier. Methods And Evaluation Criteria: The authors propose PCSGD and DiceSGD to solve the performative prediction problem and they provide guarantees that their algorithms converge to a PS solution up to a neighborhood. My main concern is the number of assumptions they make to guarantee convergence. For example, Theorem 3 assumes strong convexity, $\beta$-sensitivity, smoothness *and* bounded gradients (Lipschitzness). In contrast, [Perdomo et al, 2020, Thm 3.5] assume strong convexity, $\beta$-sensitivity, and smoothness. Assuming both smoothness and bounded gradients is too strong. 
Another example: Theorem 4 assumes smoothness, some kind of bounded variance, bounded gradients (Lipschitzness) and boundedness of $\ell$ on top of the standard $\beta$-sensitivity. Again, if we compare with [Koloskova et al, 2023, Thm: 3.3] (essentially $\beta=0$), they only assume smoothness and bounded variance. Theoretical Claims: I have checked all of the proofs in the appendix. There are a few points I don't understand. As I said before, I don't understand what $\sigma\_{DP}^2$ is in Theorem 3. When I check the proof I see that it appears in the proof of the intermediate Lemma 3, in line 635. If I understand correctly you have $$\begin{align} \theta\_{t+1} &=P\_X(\theta\_t-\gamma\_{t+1}clip\_c(\nabla\ell(\theta\_t;Z\_{t+1})))\text{ by line 129} \\ &=P\_X(\theta\_t-\gamma\_{t+1}\tilde{\nabla}g(\theta\_t))\text{ by line 591} \end{align}$$ So there is no need for $\sigma_{DP}^2$? Am I missing something? Experimental Designs Or Analyses: The authors provide numerical experiments applying their algorithms PCSGD and DiceSGD to quadratic minimization and logistic regression problems, focusing only on the DP setting. In the logistic regression experiments, they only try PCSGD/DiceSGD with different $\beta$. I would expect to see comparisons with other algorithms as well. It would also be interesting to see a few more real-world experiments; for example, in the intro you mention "A common application scenario is that the training data involve human input that responds strategically to the model" Supplementary Material: I have checked all of the proofs in the appendix. Relation To Broader Scientific Literature: The authors propose two new methods, PCSGD and DiceSGD, for solving the performative prediction problem. This problem was introduced and solved in [Perdomo et al, 2020]. Clipped SGD has been studied extensively in the literature and DiceSGD was introduced in [Zhang et al, 2024]. 
A recurring issue is that the convergence guarantees that the authors provide make many more assumptions than either of the previous works. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: Strengths: 1. Propose novel methods, PCSGD and DiceSGD, to solve the performative prediction problem. 2. Provide convergence guarantees for these methods. 3. Validate their theory via numerical experiments. Weaknesses: 1. What is $\sigma_{DP}^2$ in Theorem 3? (see Claims And Evidence/Theoretical Claims) 2. The authors make many more assumptions for their convergence results compared to the literature (see Methods And Evaluation Criteria) 3. There could be a few more experiments (See Experimental Designs Or Analyses) Other Comments Or Suggestions: None. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you for your comments and careful review. Our point-by-point replies are listed below. > The definition of $\sigma_{DP}^2$ in Theorem 3? We apologize for the careless typo. Indeed, Theorem 3 should be presented without $\sigma_{DP}^2$. This term is actually introduced later in (14) and is included in the analysis of L579-597 that generalizes Theorem 3. > Additional bounded gradient assumption vs [Perdomo et al., 2020]. We made sure that the assumptions needed are justified for common performative prediction problem setups. For the bounded gradient assumption (A3), we have only required $\nabla \ell$ to be bounded over $\theta \in {\cal X}$. - For strongly-convex $\ell$: as we focus on **projected** clipped SGD, if ${\cal X}$ is compact, then A3 is naturally implied by the Lipschitz gradient assumption (A2). As such, we respectfully disagree that "assuming both smoothness and bounded gradient is too strong" since the former implies the latter on compact sets. Note that this projection-based algorithm, however, prompted us to develop a more general analysis than [Perdomo et al., 2020] & [Mendler-Dunner et al., 2021]. - For non-convex $\ell$, loss functions such as the sigmoid loss satisfy A3, and such a condition is also needed in SOTA results such as [Li and Wai, 2024]. > Additional assumptions on bounded gradients (A3) and boundedness of $\ell$ (A7) used compared with [Koloskova et al., 2023, Theorem 3.3]. We emphasize that the inclusion of distribution shift $\beta>0$ has led to significant challenges in our proof for the non-convex $\ell$ case. It is unfair to directly compare our result with that in [Koloskova et al., 2023] which assumes $\beta=0$. To see the challenge, consider a special case with $c \gg 1$ where there is no clipping. The PCSGD with ${\cal X} = \mathbb{R}^d$ is then reduced to SGD-GD in [Mendler-Dunner et al., 2021]. 
As discussed in [Li and Wai, 2024] and references therein, analyzing SGD-GD **without strongly convex $\ell$** has not been successfully done without extra assumptions such as assuming a nonlinear least squares loss, or bounded $\ell$ as in [Li and Wai, 2024]. In this regard, we only use a similar set of assumptions as [Li and Wai, 2024]. We also remark that the bounded gradient assumption is also used in [Zhang et al., 2020b] for clipped SGD analysis. Lastly, as explained above, the set of assumptions for the non-convex case is justified too, e.g., with the sigmoid loss as $\ell$. > Experiment: Compare PCSGD and DiceSGD with more algorithms? Conduct more real-world experiments where training data involve human input that responds strategically to the model. We conducted an additional experiment comparing PCSGD and DiceSGD with the DP-Clip21-GD algorithm proposed in [Khirirat et al., 2023], see [https://ibb.co/DD9FxR71 ] -- we adopted $n=1$ & set the other parameters according to (65) in [Khirirat et al., 2023]. Observe that DiceSGD achieves better convergence. We fully agree with the necessity of conducting real-world experiments. However, as an academic paper focusing on the theoretical understanding of performative behavior, we are unable to provide real-world experiments at the moment due to limited resources. We notice that [Perdomo et al., 2020; Hardt & Mendler, 2023] also reuse standard static datasets and design distribution shift dynamics like we do, e.g., in the logistic regression example on the GiveMeSomeCredits dataset. There is indeed a challenge in designing real-world experiments, as such experiments would inevitably involve human subjects, where prior approval for compliance with ethical standards will be needed. Overcoming such a challenge is beyond the aim of this research, whose goal is to advance theories of performative prediction. Nevertheless, it is a promising research direction for the community.
The Hidden Dimensions of LLM Alignment: A Multi-Dimensional Analysis of Orthogonal Safety Directions
Accept (poster)
Summary: **Post-rebuttal edit: the authors have provided detailed and convincing responses to my concerns during the discussion phase, meaning I'm happy to increase my score from 3 to 4. I believe this paper deserves to be at ICML.** --- This work performs a multi-dimensional analysis of the shift in representations $\mathcal{T}(\textbf{x})$ induced by safety fine-tuning in language models. It does so by approximating this shift as an affine transformation $\textbf{W}\textbf{x}+\textbf{b}$ then analysing the mathematical properties (e.g. principal components, effective rank) of the matrix $\textbf{W}$. Attempts are then made to relate the principal components to safety-relevant semantic concepts (e.g. refusal), including via the layerwise relevance propagation (LRP) method for token attribution. It is found that the first principal component predicts refusal (albeit somewhat less reliably than a dedicated probe vector for this behaviour) while subsequent ones are thought to represent secondary features such as role-playing. Interventional attack analyses are then used to demonstrate the causal effect of these directions on model behaviour. Claims And Evidence: The claims of this paper can be broadly divided into: 1. descriptive claims about the geometry of representation shifts during fine-tuning; 2. causal claims about the impact of specific shift directions on model behaviour. The former are largely predicated on the accuracy of the linear shift approximation, but this is not empirically verified (see "Methods And Evaluation Criteria"). The latter seem to be quite well supported by the intervention experiments, although I do have some concerns and questions about these (see "Experimental Designs Or Analyses"). Methods And Evaluation Criteria: The method begins by learning the linear approximation $\textbf{W}\textbf{x}+\textbf{b} \approx \mathcal{T}(\textbf{x})$ then performs an analysis on this approximation. 
In itself, I find this to be quite an interesting idea, but I do have two important concerns: - Nowhere in the paper can I *see how accurate this linear approximation is*. You make a few references to the linear representation hypothesis, but from my understanding, this hypothesis does not imply anything about how representations can shift during fine-tuning. How accurate are the approximations in these experiments? - The entire analysis is performed on the matrix $\textbf{W}$, with no consideration of the bias vector $\textbf{b}$. Surely if we are to understand the directions in which representations shift, this bias term could be quite important? I would appreciate it if you could address this in your response. If there is some reason why it is reasonable to ignore the bias term, I would be very happy to hear about it and update my opinion of the method accordingly. Theoretical Claims: This work is primarily empirical in nature. One theoretical aspect is Theorem 3.2, and I somewhat doubt its utility. You start Section 3.1 by saying that you're interested in "whether the components in the residual space contain interpretable features", but then (after giving the theorem) admit that it "does not guarantee human-interpretable features". In that case, I really don't see what benefit it provides, other than giving an air of theoretical rigor to what is ultimately an empirical exercise. Experimental Designs Or Analyses: Overall, the experimental design is well-structured and easy to follow. I have a few comments about the presentation: - The LRP results seem to mainly focus on a single handcrafted example, so your interpretations may be overfitted. It would be good to see an aggregated analysis across many examples. - Figure 4 shows that ablating the L14-C6 direction selectively impairs the model's response to PAIR attacks, but lacks baselines against ablating other directions. These baselines would be helpful to show that L14-C6 is especially influential. 
- Figure 6 shows that the attacks change the projection onto the dominant component. In the text you say that this "leads to a lower refusal rate", but this refusal rate is not actually plotted in Figure 6. I feel this should be included. Supplementary Material: Appendix reviewed, but Supplementary Material not reviewed. Relation To Broader Scientific Literature: This paper falls within the area of mechanistic interpretability for language models. By focusing more specifically on interpreting *changes* in model internals as a result of fine-tuning, it attempts to tackle an under-explored and safety-relevant problem. Essential References Not Discussed: None. Other Strengths And Weaknesses: None; all mentioned in other sections. Other Comments Or Suggestions: - There are typos (e.g. "experiemental") and grammatical errors (e.g. "Imagine you conducting") in the handcrafted example in Table 1. - In Section 5.2, you say that Figure 3 shows that "the model's harmfulness prediction accuracy plateaus after layer 15", but this is a plot of *refusal* prediction accuracy. This comment therefore seems misleading. - The baseline methods for jailbreak prompts are not discussed anywhere in the main body of the paper. I suggest you add a sentence briefly introducing these, and pointing to Appendix C.1 for more details. Questions For Authors: 1. Can you respond to my concern about the accuracy of the linear approximation? ("Methods And Evaluation Criteria" section) 2. Can you respond to my concern about the omission of $\textbf{b}$ from the analysis? ("Methods And Evaluation Criteria" section) 3. When discussing Figure 2, you say that the effective rank of $1$ in early layers suggests "that safety training neither introduces nor strengthens new directions -- this aligns with the mid-early safety layer hypothesis". I don't quite follow this argument. Could you put it in different terms, perhaps with more explicit reference to the underlying geometry? 4. 
Why is the actual refusal rate not plotted in Figure 6? Currently the reader has to take your word for it that the attacks actually modify the model behaviour in the expected way. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We are encouraged that the reviewer acknowledges our novelty and well-designed experiments. We will carefully address your concerns below: > **Q1:** How accurate are the approximations in these experiments? You raised a good point. We did not initially include the accuracy of the residual space map because our framework does not strictly require the activation transformation $T(X)$ to be entirely linear. However, we agree with you that it would be problematic if the map approximated $T(X)$ only poorly. We provide the Mean Square Error evaluation for the approximations in the following table. We found the MSE between the finetuned activations and the approximation is negligible compared to the Mean Square Norm of the activations. This suggests that **in our safety training case, the learned affine map accurately captures $T(X)$.** **[[ View table of MSE]](https://shorturl.at/e6ac4)** We will discuss and report this result in the revised paper. > **Q2:** The entire analysis is performed on the matrix W, with no consideration of the bias vector b. You raised a very good question. **We excluded the bias vector b from this analysis because it does not affect extracted feature directions**. We study feature directions in this paper, which are directions in activation space activated by certain context features. Since **b** represents the component of the affine map independent of the activation $X$, it acts as a constant offset regardless of the input. Hence, the direction represented by **b** is not a feature direction as defined and is not addressed by our methods, although we agree it might carry useful semantic meaning. Alternatively, performing SVD and selecting the right singular vectors finds the *directions with the largest variance* in the residual space. As **b** is the global offset, it does not affect variance. 
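To make the Q1/Q2 points concrete, here is a minimal numpy sketch (synthetic data, not the authors' code or models) of fitting the affine map by least squares, measuring the relative MSE described in the rebuttal, and checking that a constant offset in the targets is absorbed entirely by the fitted bias without changing $W$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 2000                         # toy hidden size / sample count

# Synthetic stand-ins for base activations X and fine-tuned activations Y = T(X);
# here T is exactly affine by construction, unlike a real model
X = rng.normal(size=(n, d))
W_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))
b_true = rng.normal(size=d)
Y = X @ W_true.T + b_true

# Fit Y ≈ X W^T + b by least squares (append a ones column to absorb b)
X1 = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X1, Y, rcond=None)
W_hat, b_hat = coef[:d].T, coef[d]

# Relative error: MSE of the approximation vs. mean squared norm of activations,
# the ratio the rebuttal reports as negligible
rel_err = np.mean((X1 @ coef - Y) ** 2) / np.mean(Y ** 2)
print(rel_err)

# A constant offset added to Y is absorbed entirely by the fitted bias;
# the fitted W (and hence any SVD of W - I) is unchanged
coef_shift, *_ = np.linalg.lstsq(X1, Y + 5.0, rcond=None)
assert np.allclose(coef_shift[:d], coef[:d], atol=1e-8)
```

On real activations the fit is not exact, so the relative error would be small but nonzero; the offset-invariance of $W$, however, holds for any least-squares affine fit.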
> **Q3:** Figure 4 shows that ablating the L14-C6 direction selectively impairs the model's response to PAIR attacks, but lacks baselines against ablating other directions. You raised a valid point. In the revised paper, **we have updated Figure 4 to include results for L14-C4 and L14-C5 to show the selective impact of L14-C6**. Please see the figure and caption below. **[[View updated Figure 4]](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/ZzTJiZ5HC52b8cqa9E1g3.png)** >**Q4:** Effective ranks of 1 in early layers suggests "that safety training neither introduces nor strengthens new directions...". I don't quite follow this argument. When the effective rank is 1, the variance in the activation transformation is mainly captured by a single direction, indicating that activations are changing along a 1-d line. Compared to the much higher ranks (around 100) observed in mid-to-late layers, we conclude that no substantial learning of new features happens in the early layers. > **Q5:** Refusal rate is not actually plotted in Figure 6. I feel this should be included. We initially omitted the refusal rate from Figure 6 due to (1) figure design considerations and (2) this metric primarily reflects output behavior rather than showing internal representation changes. However, we agree that including this data will strengthen the presentation of our results. We provide the refusal rate after intervention on the test set in the table below. We can see non-dominance suppression significantly reduces the refusal rate for jailbroken samples, while having a small impact on harmful and benign samples. **[[View table of refusal rate in Figure 6]](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/sBYs5rTPEAspyW1k10cjI.png)** We will discuss and provide these results in Section 6. > **Q6:** It would be good to see an aggregated analysis across many examples for LRP results. 
In the paper, we used a handcrafted example to showcase PLRP results mainly for visualization purposes, as we found it difficult to understand relevant tokens without concrete context. **The following table demonstrates that the top relevant tokens remain interpretable even when aggregated across the test set.** **[[View table of aggregated evaluation]](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/1DO2yA6CUI_lJU6zez-Y6.png)** We will provide this aggregated evaluation in the revision. > **Q7:** Theorem 3.2 Utility You raised a really good point. We agree that our primary claims are supported by empirical evidence. **Our theorem mainly provides *a motivation for later analysis***, suggesting that the residual space can be analyzed similarly to feature directions. Our experimental results support the theorem and provide deeper insights beyond the theorem statement. In the revised version, we will explicitly discuss the purpose of our theorem. > **Q8:** Other Comments Or Suggestions Thank you for pointing these out! We will address these points in the updated version. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I've replied only where I feel I have more to add (all on the topic of the bias term). **Q2:** This is helpful, but I'm still not quite satisfied here. I don't follow your claim that *"the direction represented by $\textbf{b}$ is not a feature direction"*. Would it not be meaningful to measure the extent to which $\textbf{b}$ is aligned with (for example) a "harmlessness" direction, in which case we could say that fine-tuning tends to shift all activations to be more harmless? **Q4:** You say that an effective rank of $1$ indicates that *"activations are changing along a 1-d line"*, but is it not more accurate to say that the *variation* in the way activations are changing lies along a 1-d line? This brings me back to the bias term again. 
From my understanding, an effective rank of $1$ actually means that the activations tend to change in the direction given by $\textbf{b}$, while "spreading out" along some other direction given by the top principal component. This seems to be an important distinction to me, which surely changes the interpretation of most (if not all) of your results. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We couldn’t fully discuss this earlier due to the length limit. You gave a good example: after safety fine-tuning, activations shift toward a “harmlessness” direction on average. This global shift is captured by the bias term **b**. We agree with your example and will use it to explain why we ignore **b**. To clarify, we decompose **b** into orthogonal directions: $b = \sum_{i=1}^n v_i$. We consider two types of $v_i$: - Type 1: $v_i$ is in W, meaning it has positive projections onto the safety residual space. - Type 2: $v_i$ is not in W, meaning it is orthogonal to all feature directions and exists only in b. Since our methods focus on W, we have studied type 1 $v_i$ in our framework. Our main reason to ignore b is that **type 2 $v_i$ isn’t a feature direction, can’t be analyzed in our framework, and isn’t meaningful to study.** As a type 2 $v_i$ isn’t in W, it is not linearly correlated with input activations. Therefore it can’t be explained by linear input features. This contradicts the definition of a linear feature direction. As required by Definition 2.1, we cannot find a context pair that moves activations in this direction, as it is independent of input context. Past research has shown that most meaningful feature directions **depend on specific inputs**. For example, [1] calculates the refusal direction by subtracting safe sample activations from unsafe samples. Similarly, [2] shows harmless output features are related to activation regions with toxic inputs, which DPO/SSFT learns to avoid. 
We understand and agree that your example shows **b** can include a global shift toward harmlessness. Past work [2] supports your hypothesis. **But b representing a “harmlessness” direction doesn’t mean W fails to capture it**. As long as a shift in the “harmlessness” direction correlates with activations or input, which we believe holds for most safety-related directions, then W can incorporate it as a type 1 $v_i$. Our experiment in Figure 3 confirms this: the first component in W - I predicts model output with high accuracy. As an example, our training set contains both harmful and benign samples. A type 2 direction in b would mean it affects the refusal rate regardless of content, which we think is less meaningful to study. Since it is not triggered by input context, our PLRP method doesn’t apply, so we exclude it. **In summary, we ignore the bias term b because it doesn’t fit in our framework, and we lose no meaningful directions by doing so**. On your **Q4** comment, thank you for the valuable question. As **b** is an activation offset, it shouldn’t affect the effective rank, which is a metric of variance. In practice, we compute the effective rank from the eigenvalues of W - I, and likewise do not use **b**. We hope this clarification has made our answer to **Q4** clearer. We agree that the rank-1 direction might correspond to the direction of **b**, as a direction with a larger offset normally also shows larger variance. While this seems to support that key directions are captured by W, it requires further validation to confirm. We agree that measuring the global offset of activations (represented by b) is meaningful and provides important insights into the alignment mechanism. Previous studies have explored activation changes before and after safety fine-tuning (see our related work). 
In this paper, our contribution lies in applying PLRP and intervening on the decomposition of the safety residual space, making us the first to provide a detailed interpretation of ‘what is learned in the activation shift.’ In other words, we not only demonstrate that activations are pushed along safety-related directions but also offer a specific interpretation of those shifts. Thank you so much for your interest in our work! We hope the above response has made the answer clearer. We’re more than happy to address any more questions or concerns you have. [1] Arditi, Andy et al. “Refusal in Language Models Is Mediated by a Single Direction.” ArXiv abs/2406.11717 (2024). [2] Lee, Andrew, et al. “A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity.” ICML 2024.
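For readers following the effective-rank exchange above, a common definition is the entropy-based effective rank of Roy & Vetterli (2007) — assumed here, since the thread does not quote the paper's exact formula. A toy sketch showing why a single dominant direction in $W - I$ yields an effective rank near 1:

```python
import numpy as np

def effective_rank(M):
    """Entropy-based effective rank (Roy & Vetterli, 2007):
    exp of the Shannon entropy of the normalized singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                      # drop numerically-zero singular values
    return float(np.exp(-(p * np.log(p)).sum()))

d = 32
# W - I dominated by a single direction: effective rank is ~1,
# matching the early-layer behavior described in the rebuttal
u = np.random.default_rng(0).normal(size=(d, 1))
W = np.eye(d) + u @ u.T
print(effective_rank(W - np.eye(d)))   # close to 1
```

Note that, as the authors state, this quantity depends only on the spectrum of $W - I$; a constant offset $\textbf{b}$ never enters the computation.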
Summary: The paper focuses on the mechanisms of safety alignment in large language models, exploring how the internal representations of the model's refusal of harmful inputs manifest as multiple orthogonal directions in activation space. It introduces the concept of the "safety residual space" and identifies dominant and non-dominant directions through linear decomposition of representation shifts during the safety fine-tuning process. The paper designs and validates intervention experiments (such as removing specific trigger words) to reveal vulnerabilities in safety alignment, offering a new multidimensional perspective on understanding model safety. Claims And Evidence: I think claims are not entirely substantiated by the presented results and evidence. The current analysis primarily focuses on a specific model (Llama 3 8B) and a specific dataset, and it remains unclear whether the findings can be generalized to other models or application scenarios. Methods And Evaluation Criteria: Yes Theoretical Claims: I did not fully verify the correctness of the proofs for the theoretical claims. Experimental Designs Or Analyses: Yes The current analysis primarily focuses on a specific model (Llama 3 8B) and a specific dataset, and it remains unclear whether the findings can be generalized to other models or application scenarios. Supplementary Material: Yes Relation To Broader Scientific Literature: The paper introduces a multidimensional perspective on safety alignment, demonstrating through experiments how orthogonal directions in the safety residual space influence model behavior Essential References Not Discussed: No obvious omissions Other Strengths And Weaknesses: Strengths: 1. The paper proposes a multidimensional perspective on safety alignment and demonstrates through experiments the explanatory power and interactions of orthogonal directions in the safety residual space on model behavior. 2. 
The experimental results reveal that adjusting trigger words can bypass the model's safety defenses, providing new insights for improving model security. Weaknesses: 1. The current analysis primarily focuses on a specific model (Llama 3 8B) and a specific dataset, and it remains unclear whether the findings can be generalized to other models or application scenarios. 2. Although the paper proposes methods for bypassing safety mechanisms, the discussion on how to improve model defenses based on these findings to prevent similar vulnerabilities is somewhat lacking. Other Comments Or Suggestions: Please see the Questions For Authors section. Questions For Authors: Are there any safety risks or side effects associated with intervention experiments on different non-dominant directions? How can intervention strategies be optimized to maintain both the overall capability and security of the model? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions. We will carefully address your questions in the following. > **Q1:** The current analysis primarily focuses on a specific model (Llama 3 8B) and a specific dataset, and it remains unclear whether the findings can be generalized to other models or application scenarios. Thank you for your insightful suggestions. We will answer your questions in two parts: **Generalization to other models.** Our evaluation focused on Llama 3.1 8B as a case study for mechanistic interpretation. We agree that evaluation on models of different scales would further strengthen our claims. To this end, we now include a safety residual space analysis for **Llama 3.2 3B** to verify that our findings generalize across scales. As shown in the figure below, the residual space for Llama 3.2 3B shows similar low-rank linearity and interpretability to Llama 3.1 8B. **[[View figure of analyzing Llama 3.2 3B]](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/3VaIR7muY-H_EalTIQhfh.png)** We will add this additional figure to the Appendix in the revised paper and reference it in the main body. **Generalization to other application scenarios.** Our current evaluation covers over 6 strong jailbreak baselines, representing different real-world adversarial attack strategies. By using jailbreak attacks as a representative scenario, our work gives mechanistic insight into safety features that can apply to other applications (e.g. backdoor attacks or membership inference). We agree that expanding the scope of our safety residual space analysis to more scenarios would be valuable, and we plan to explore this in future work. > **Q2:** Although the paper proposes methods for bypassing safety mechanisms, the discussion on how to improve model defenses based on these findings to prevent similar vulnerabilities is somewhat lacking. Thank you for your insightful suggestions. 
In our conclusion, we briefly mentioned promising directions for improving model defenses based on the safety residual space. We can elaborate on two key methods: - **Interventions for Robustness.**  While interventions can remove safety ability by targeting specific directions in the residual space, they could also be used to strengthen defenses. Our framework identifies multiple crucial safety feature directions. This information could guide model editing techniques [1, 2] to selectively mitigate vulnerabilities by modifying features associated with specific non-dominant directions. - **LLM Red-Teaming.** Automated LLM Red-Teaming methods [3] could benefit from our findings too. Using PLRP on residual space components to identify trigger tokens offers a more mechanistic way to discover potential vulnerabilities. Red-teaming pipelines could incorporate our analysis to probe for weaknesses from internal representations more efficiently. We will incorporate a detailed discussion of potential defense improvements in the revision. > **Q3:** Are there any safety risks or side effects associated with intervention experiments on different non-dominant directions? How can intervention strategies be optimized to maintain both the overall capability and security of the model? Thank you for the valuable questions. We'd like to gently note that in our paper, intervention mainly serves as an **experimental tool** to validate the causal effect of non-dominant directions on safety behavior. We agree with you that there are potential risks and optimizations: - **Safety risks**: As demonstrated, targeted interventions removing specific non-dominant directions can weaken learned safety features, potentially offering a low-cost method to maliciously un-align a model. - **Side effects:** Appendix C.4 shows that interventions can slightly degrade general task performance, although in our experiments, the impact was relatively contained compared to alignment methods like DPO. 
- **Potential Optimization**: Our intervention generally follows established methods used for mechanistic analysis [4]. Optimizing inference-time interventions [5] is an active research area. Factors like intervention strength, targeted directions, and the modules affected all affect the trade-off. We are also interested in exploring this direction in future work. [1] Wei, Boyi, et al. "Assessing the brittleness of safety alignment via pruning and low-rank modifications." ICLR 2024. [2] Uppaal, Rheeya, et al. "Model editing as a robust and denoised variant of DPO: A case study on toxicity." ICLR 2025. [3] Lin, Lizhi, et al. "Against The Achilles' Heel: A Survey on Red Teaming for Generative Models." Journal of Artificial Intelligence Research 82 (2025). [4] Lee, Andrew, et al. "A mechanistic understanding of alignment algorithms: A case study on dpo and toxicity." ICML 2024. [5] Li, Kenneth, et al. "Inference-time intervention: Eliciting truthful answers from a language model." NeurIPS 2023. --- Rebuttal Comment 1.1: Comment: Thanks for your explanations and additional experiments. I have no extra questions. I would be inclined to defer to the opinions of the other reviewers. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable comments and suggestions! We will incorporate your suggestions to improve our paper.
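As background for the interventions discussed in Q3 above: directional ablation is typically implemented by projecting the targeted direction out of the activations. A generic numpy sketch of this standard projection-based operation (not the authors' implementation):

```python
import numpy as np

def ablate_direction(acts, v):
    """Remove the component of each activation along direction v
    (the standard projection-based 'directional ablation')."""
    v = v / np.linalg.norm(v)
    return acts - np.outer(acts @ v, v)

rng = np.random.default_rng(0)
acts = rng.normal(size=(5, 8))     # toy batch of activations
v = rng.normal(size=8)             # e.g. a non-dominant residual-space direction
ablated = ablate_direction(acts, v)

# Components along v are gone; everything orthogonal to v is untouched
assert np.allclose(ablated @ (v / np.linalg.norm(v)), 0.0)
```

The side effects the authors mention arise because any feature partially aligned with `v` is also attenuated by this projection.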
Summary: This paper investigates which feature directions are used by safety-tuned language models to determine whether or not to refuse a request. This is done by optimizing an affine mapping to approximate the activations of a safety-tuned model given the corresponding activations in the pretrained model before safety tuning, referring to this map as the *safety residual space* (1). The paper presents a theorem stating that all directions in the safety residual space decrease the model's loss on net. The paper then safety-tunes LLaMA 3.1 models (one using DPO and one using self-supervised finetuning (SSFT)) and examines the safety residual spaces of these models. The paper looks at the "effective rank" of the safety residual space to quantify the extent of learned changes during safety-tuning at different layers, and finds differences between DPO and SSFT. It also looks at the singular vectors of the safety residual spaces, and uses a variant of layerwise relevance propagation to find patterns in which tokens are most important to these vectors, along with the importance of earlier-layer vectors to later-layer ones. The paper finds that ablating the first singular vector completely removes the model's refusal capability, while ablating later singular vectors causes the model to defend worse against specific jailbreak types. Finally, the paper examines jailbreak performance versus the number of jailbreak examples in the safety-tuning dataset, and finds that explicitly optimizing jailbreak examples based on information from the singular vectors makes it harder for the model to defend against them. (1) Note that it's unclear to me whether the *safety residual space* refers to the map itself, to the matrix $W$ associated with the map (i.e. the linear part of the affine map), to the matrix $W-I$, or to the images of any of these maps. See "Other Strengths and Weaknesses" for more on this. 
## Update after rebuttal After the rebuttal process, I have changed my score to an accept. This is for the following reasons: * The authors clarified all of the terminology in the paper that I found unclear, which was one of the biggest reasons for my initial low recommendation. * The authors added new results regarding the accuracy of the learned linear transformation, filling one of the biggest gaps in their experiments. * The authors removed the incorrect/superfluous theorem from the paper, and replaced it with an updated, more sensible framing. The main idea of this paper -- that is, to investigate the difference between fine-tuned and base LLMs by learning an affine map between their activation spaces -- is very interesting, and now that the authors have addressed my concerns, I think that their paper putting forth this idea and performing investigations with it deserves to be accepted. Claims And Evidence: * In Section 4, the paper claims that "safety residual space is low-rank linear", on the basis of an empirically low effective rank of $W-I$, where $W$ is the matrix associated with the learned "safety residual space" affine map. * I believe that this claim is mostly supported. However, the paper never mentions the accuracy of the safety residual space map. If this map is inaccurate, then the results presented here might instead only mean that the *easily-linearly-approximable* changes in activations after safety-tuning are low rank -- while the "nonlinear" changes in activations could potentially be higher-dimensional. To address this, it would be useful for the paper to report the mean squared error between the activations of the safety-tuned model and the outputs of the safety residual space map, in addition to the mean squared norm of activations at this layer. 
(An even better (albeit slightly more complex) metric to compute would be the KL divergence between the safety-tuned model and the safety-tuned model whose activations at a given layer are replaced by the corresponding outputs from the safety residual space map.) * In Section 5.1, the paper claims that certain directions are activated by specific patterns of tokens (e.g. "we observe that the dominant direction primarily relates to harmful subjects", "For instance, tokens like *Imagine*, *fictional* and *hypothetical* in L14-C2 establish a hypothetical tone"). However, from what I can tell, all of this is inferred by looking at relevance scores *on a single input example*, which is not enough to confidently make statements about the patterns that cause a direction to be activated. I recommend that the authors refer to "AutoInterp" methods (e.g. [1]). All other claims in the paper seem well-supported as far as I could tell. The main other claims are as follows: * Section 4: "The dominant direction predicts aligned behavior". The claim is supported by Figure 3, showing that the top principal components of the safety residual space achieve high classification accuracies of harmful vs. harmless behavior. * Section 4: "multiple orthogonal feature directions can predict refusal behavior beyond the single dominant direction or probe vector". Mostly supported by the fact that as shown in Figure 3, early-layer "Best-of-N SSFT" predictions have accuracy around 80%, which is actually greater than the accuracy of the dominant direction predictor for these layers. (But the "Best-of-N SSFT" prediction accuracy no longer tells us anything about whether the non-dominant feature directions are useful predictors after around layer 15 or so, because the best-performing direction is the dominant direction. It would be better to exclude the dominant direction from among the Best-of-N SSFT directions.) 
* Section 5.1: The paper claims that they found a specific direction L14-C6 which seems to activate on jailbreaks similar to those produced by PAIR. This claim is supported by evidence that ablating this direction increases jailbreak success rate on PAIR jailbreaks from ~0% to ~50%, without increasing other jailbreak success rates above ~10% (Figure 4). * Section 5.2: "In most layers, directions primarily retain information from their counterparts in the previous layer. [...] In contrast, during early layers, directions exhibit a more dynamic pattern, receiving contributions from multiple directions in the previous layer.". This is well-supported by Figure 5, although explicit numerical metrics in the main text might further make this apparent. * Section 6: The paper claims that running their layerwise relevance propagation variant on vectors in the "safety residual space" identifies tokens that induce refusal in the model. The paper supports this by automatically constructing jailbreaking prompts that avoid these tokens (using an algorithm dubbed "Trigger Removal"), and finds that even when such prompts are included in a safety-tuning dataset, the model is still weaker against these jailbreaks than against other jailbreaking methods when their outputs are included in the safety-tuning dataset (Table 2). [1] https://transformer-circuits.pub/2023/monosemantic-features/index.html#global-analysis-interp-auto-acts Methods And Evaluation Criteria: **Weaknesses** * In general, it is difficult for me to evaluate the datasets used by the paper for evaluation, since much of the paper is unclear about which datasets are even being used for evaluation (see Question #3 from later in this review). * As someone unfamiliar with the layerwise relevance propagation literature, it was hard for me to evaluate the authors' extension of it, because the paper doesn't include a full description of their algorithm. 
* I believe that the idea of the "safety residual space" presented in the paper is sound. However, no evaluation criteria (or actual evaluations) are provided for it. It is thus impossible to judge the accuracy of the learned affine map (see the "Claims and Evidence" section of this review). * In Appendix C, the performance hit incurred by safety-tuned models is measured using perplexity increase, but this can be a difficult-to-interpret metric. Looking at model performance on datasets such as MATH, or alternatively using LLMs to evaluate outputs, might be better. * In Section 4, it is never explained what the Best-of-N BASE baseline is. Additionally, it would be more sensible to exclude the dominant direction from the Best-of-N SSFT predictor, because once the dominant direction becomes the most effective predictor, the two have the same accuracies, preventing us from understanding the accuracy of the non-dominant directions. * In Section 5, it is unclear how exactly partial layerwise relevance propagation scores are computed to me, a reader who is not familiar with the layerwise relevance propagation literature. I believe that readers who have a similar lack of familiarity with that area would be well-served by a detailed algorithm, possibly in the Appendix. * In Section 5.1, the approach of interpreting directions based on a single input example seems ill-advised. I would recommend that for each direction, the authors compute the tokens across all input examples that have the highest PLRP scores; then, to find patterns and assess the interpretability of such patterns, an approach such as auto-interp [2] could be used. * In Figure 6, the paper never says whether the plotted values are medians or means. Additionally, the figure gives no information about the distribution of the projection values (presumably dot products), even though it would be very useful to see whether these distributions have overlaps. 
**Strengths** * The safety-tuned models are trained via supervised finetuning and DPO. The resulting models were evaluated for ability to resist jailbreaks by measuring refusal percentage, and by using the "StrongREJECT evaluator" [1] to evaluate harmfulness of non-refused outputs. This is a standard, sensible evaluation. * Using effective rank to measure the extent of the change in activations at different layers between the safety-tuned and base models (Fig. 2) is a very solid approach. * In Section 5.2, the approach that the paper takes in determining the relevance of directions in adjacent layers is a sensible way of analyzing layerwise dependencies. * The experiment on "safety finetuned models under different exposure settings" whose results are displayed in Table 2 is a sensible way to measure the importance of trigger tokens to jailbreaks. [1] Souly, A., Lu, Q., Bowen, D., Trinh, T., Hsieh, E., Pandey, S., Abbeel, P., Svegliato, J., Emmons, S., Watkins, O., Toyer, S. A StrongREJECT for Empty Jailbreaks. arXiv preprint arXiv:2402.10260 [2] https://transformer-circuits.pub/2023/monosemantic-features/index.html#global-analysis-interp-auto-acts Theoretical Claims: I believe that the proof of Theorem 3.2 as given in Appendix A is incorrect. The offending lines are the following: > Given that $W^*$ converged with $J(W^*) \ge \sup_{W \in \mathcal{M}} J(W) - \epsilon$, it cannot contain any zero-utility directions. If it did, we could remove such a direction without affecting performance, contradicting the convergence assmuption. This is false: convergence to a local optimum does not imply that any change in the weights would yield a worse-performing network. A near-trivial example of this (for ReLU networks) is given in [1]: "in ReLU networks we can multiply the weights in one layer by a certain value, and divide the weights in the next layer by the same value, and the outputs of the network (and consequently loss) wouldn’t change." 
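The ReLU rescaling symmetry quoted from [1] can be verified in a few lines (a toy sketch of the counterexample, not code from either the paper or the blog post):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))      # first layer weights
W2 = rng.normal(size=(3, 8))      # next layer weights
x = rng.normal(size=4)

def net(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)   # two-layer ReLU network

# ReLU is positively homogeneous, so scaling one layer up and the next
# down by the same positive factor leaves the function (and loss) unchanged
c = 3.7
out = net(W1, W2, x)
out_scaled = net(c * W1, W2 / c, x)
assert np.allclose(out, out_scaled)
```

This gives a concrete direction in weight space along which the loss is exactly flat, illustrating why convergence alone cannot rule out zero-utility directions as the proof assumes.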
(In the associated paper [2], the authors of [1] find more complex examples of this phenomenon, known as "mode connectivity" in the literature and believed to be an important factor in why neural networks are able to generalize.) In fact, even if we did choose to make the stronger assumption (which is not empirically borne out in real neural networks) that there is no mode connectivity between the optimal solution $W^*$ and any other solutions, then the argument made in the proof would imply that *any* nonzero vector $v$ (rather than solely directions in the safety residual space) must contribute positively to the training objective on some input (in the sense that ablating $v$ from the model's weights would cause the model to perform worse on that input). The resulting conclusion would thus be trivial. [1] https://izmailovpavel.github.io/curves_blogpost/ [2] Garipov, T., Izmailov, P., Podoprikhin, D., Vetrov, D., and Wilson, A. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs. Advances in Neural Information Processing Systems 31. 2018. Experimental Designs Or Analyses: See "Claims and Evidence" and "Methods And Evaluation Criteria" for experiments whose validity I checked. Supplementary Material: I reviewed Appendix A (the proof of Theorem 3.2). I also skimmed Appendix B, and skimmed the results part of Appendix C. Relation To Broader Scientific Literature: This paper is related to understanding at a mechanistic level the behavior induced by safety-tuning. It is related to papers that directly focus on understanding the activation spaces of safety-tuned models, such as [1] (which this paper cited), that finds a direction in the activation spaces of safety-tuned models that mediates whether harmful requests are refused or not. 
It is also related to papers that study this question from the perspective of parameter space, such as [2], which finds that safety-tuned models' parameters live in small "basins" in parameter space outside of which model harmfulness increases drastically. There is also some relationship between the methods and findings introduced in this paper and some previous literature in model comparison, which will be further elaborated upon in the "Essential References Not Discussed" section of this review. [1] Arditi, A., Obeso, O., Syed, A., Paleka, D., Panickssery, N., Gurnee, W., and Nanda, N. Refusal in language models is mediated by a single direction. arXiv preprint arXiv:2406.11717, 2024. [2] Peng, S., Chen, P., Hull, M., and Chau, D. Navigating the Safety Landscape: Measuring Risks in Finetuning Large Language Models. 38th Conference on Neural Information Processing Systems, 2024. Essential References Not Discussed: The main concept used throughout the paper's analysis is the "safety residual space", defined using a learned affine mapping from the activations of a base language model to the activations of a safety-tuned language model. While not directly applied to comparing safety-tuned models with their base model counterparts, there have been previous works that investigate the use of learned affine mappings between the activations of two different models, particularly [1] and [2]. The notion of the "safety residual space" can thus be viewed as an adaptation of this prior approach to the setting of understanding safety-tuned models, and I thus recommend that this paper cite these prior works. [1] Lahner, Z., and Moeller, M. On the Direct Alignment of Latent Spaces. First Edition of the Workshop on Unifying Representations in Neural Models. 2023. [2] Maiorca, V., Moschella, L., Norelli, A., Fumero, M., Locatello, F., and Rodola, E. Latent Space Translation via Semantic Alignment. 37th Conference on Neural Information Processing Systems. 2023.
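For concreteness, the construction I am referring to — fitting an affine map from base-model activations to safety-tuned activations and then taking singular vectors of $W - I$ — can be sketched as follows. This is my own illustration on synthetic data; the names (`X`, `Y`, `k`) and shapes are my choices, not the paper's code:

```python
# Sketch: learn an affine map Y ≈ X W^T + b by least squares, then take
# the top-k right singular vectors of W - I as "residual" directions.
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 32, 5
X = rng.normal(size=(n, d))                         # base-model activations
W_true = np.eye(d) + 0.1 * rng.normal(size=(d, d))  # synthetic ground truth
b_true = rng.normal(size=d)
Y = X @ W_true.T + b_true                           # "safety-tuned" activations

# Solve min_{W,b} ||Y - (X W^T + b)||_F^2 via least squares on [X, 1].
X1 = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X1, Y, rcond=None)
W, b = coef[:d].T, coef[d]

# Candidate feature directions: top-k right singular vectors of W - I.
_, svals, Vt = np.linalg.svd(W - np.eye(d))
components = Vt[:k]
```

Papers [1] and [2] below fit maps of essentially this form between two different models' latent spaces, which is why I view the safety residual space as an adaptation of that idea.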
Other Strengths And Weaknesses: **Strengths** The major strength of this paper, in my opinion, is the idea of learning an affine map between a pretrained and finetuned language model in order to mechanistically analyze the behaviors learned during finetuning. Even though learned maps between different models have already been considered in the literature (e.g. [1], [2]), the paper under review is the first that I've read that applies this to the same model pre- and post-finetuning. I could see this method being a useful tool for performing "model diffing"; it is thus to this paper's credit that it introduces such a method. I also like the idea of using a layerwise relevance propagation variant to assess which tokens are most important for causing each feature vector to activate. (Most mechanistic interpretability papers I've seen instead look at the tokens whose model activations have the largest dot products with the feature vector, which doesn't take into account tokens earlier in the context.) **Weaknesses** In addition to those addressed in earlier sections of this review (e.g. Theorem 3.2 seems to be incorrect, lack of information on evaluation datasets, etc.), one of the biggest is a pervasive lack of clarity and precision in the writing. Some examples of this are as follows: * In lines 71-75, left column: "A one-dimensional *feature* value $W$ (e.g., "gender", "harmfulness") is defined as a latent variable that instantiates context $w$ through a generator function, denoted as $w := \mathcal{G}(W)$." It is not explained anywhere previously in the text what it means to instantiate context through a generator function, and I have not seen this verbiage used anywhere prior in the literature (including in the paper Park et al. (2023) cited in the sentence prior). 
My best guess is that the authors are trying to say that a feature is a latent variable that causally affects what output text $w$ is sampled from a model; sampling from the model under the effect of $W$ and obtaining $w$ as output is denoted by $w := \mathcal{G}(W)$.
* In lines 110-112, left column: "The safety residual space $V_\mathcal{T}$ is defined as the optimal affine transformation parameterized as $V_\mathcal{T} (x) = \mathbf{Wx} + \mathbf{b}$ that minimizes [...]". But in lines 128-129, the authors define $V_\mathcal{T}^{:k}$ to consist of the first $k$ singular components of $\mathbf{W} - \mathbf{I}$. Does this imply that $V_\mathcal{T}$ is equal to $\mathbf{W - I}$, is it equal to $\mathbf{W}$ (as the original definition suggested), or is it equal to the affine map $\mathbf{x} \mapsto \mathbf{Wx}+\mathbf{b}$?
* In the paragraph titled "Dominant Direction Predicts Aligned Behavior" in Section 4, it is stated that "components directly extracted from the trained models' activations fail to predict refusal behavior, as evidenced by the Best-of-N BASE baseline". However, it is never explained what the "Best-of-N BASE baseline" is, or what it means for components to be directly extracted from trained model activations. Given that they also use a predictor called "Best-of-N SSFT" which is described as using the "highest accuracy among the first 100 components for each layer", one might infer that "Best-of-N BASE baseline" refers to using the highest-accuracy component among the first 100 components *in the base model*. But since the term "component" has thus far been used to refer to a singular vector of $W-I$ where $W$ is defined in terms of a map from base model to safety-tuned model activations, it is unclear what meaning this term has in the context of the base model alone.

These are a few examples of the overloaded/inconsistent terminology present throughout the paper.
(Due to the quantity of such instances, I cannot list or go into detail on every single example; note that some more are mentioned in the "Questions" section below.) Because this sort of writing is consistently present throughout the paper, it was thus very difficult for me to parse what methods exactly were being used or what exactly was being studied in certain experiments. I recommend that the authors reread this paper carefully, making sure that whenever a technical term is used (e.g. "residual space", "component"), it is true that 1) the term is defined somewhere and that 2) the current use of the term is consistent with the definition and with previous use. If the authors do this and revise the paper accordingly, I am confident that readers would be far better able to appreciate the good ideas presented in this paper. [1] Lahner, Z., and Moeller, M. On the Direct Alignment of Latent Spaces. First Edition of the Workshop on Unifying Representations in Neural Models. 2023. [2] Maiorca, V., Moschella, L., Norelli, A., Fumero, M., Locatello, F., and Rodola, E. Latent Space Translation via Semantic Alignment. 37th Conference on Neural Information Processing Systems. 2023. Other Comments Or Suggestions: I have none in particular. Questions For Authors: Note that the following list of questions is not exhaustive. Nevertheless, I believe that the paper should be edited with answers to them before I would consider raising my score. 1. Does the term "safety residual space" refer to the affine map from base model activations to safety-tuned model activations given by $x \mapsto Wx + b$, does it refer to only the linear part of the map $W$, or does it refer to $W-I$? 2. How was the probe vector in Section 4 computed? Was it computed via the same method on the same dataset as Arditi et al. 2024, or was anything different in the authors' setup in this paper? 3. In general, I couldn't find a clear statement of what dataset (and how many examples, etc.) 
was used for learning the safety residual space map, for evaluating effective rank (Fig. 2), and for measuring prediction accuracy (Fig. 3). Is this the same dataset that was used for finetuning the models in Section 3.2? 4. In Section 4, what exactly is the Best-of-N BASE baseline and how is it computed? 5. In Section 5, when PLRP is used, is the relevance value associated with each context token equal to the relevance of that context token for causing the direction to have a high dot product with a fixed token's activations? Is the relevance value with respect to the direction's dot product with all tokens in the context? Or something else? (More generally, I am personally not familiar with LRP, so I would find it useful for myself and for other readers unfamiliar with that specific approach to add a full description of the PLRP formulas somewhere in the paper or in an appendix.) 6. In Section 6, when describing dominant component suppression, the paper states "To preserve the model’s ability to refuse plainly harmful prompts, we exclude component directions with harmfulness correlations above 0.7." Does the term "harmfulness correlation" mean "correlation between the dot products of the vector with model activations and the binary variable of whether the input is harmful"? 7. In Section 6, how many non-dominant components are suppressed? Are all non-dominant components with harmfulness correlations above 0.7 suppressed, or only some? The following question is less vital to be answered, but I would still be interested in knowing the answer: 1. In Section 4, it is stated that "all components found have near-zero cosine similarity with the probe vector". What about if the mean of these components is taken -- does the mean have a higher cosine similarity? Less importantly, what are the exact cosine similarities? (It would make the most sense to put these values in an appendix rather than the main body.) Code Of Conduct: Affirmed. Overall Recommendation: 4
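Regarding questions 6 and 7 above: directional suppression of this kind is typically implemented by projecting activations onto the orthogonal complement of the suppressed directions. The following is a generic sketch of that operation with random stand-in vectors — I am assuming, not asserting, that the paper's dominant/non-dominant component suppression works along these lines:

```python
# Sketch: suppress a set of directions V from an activation vector x by
# projecting x onto the orthogonal complement of span(V).
import numpy as np

rng = np.random.default_rng(2)
d = 64
x = rng.normal(size=d)                  # an activation vector
V = rng.normal(size=(3, d))             # directions to suppress (stand-ins)
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Orthonormal basis for span(V); raw projections per direction would not
# fully remove non-orthogonal directions, hence the QR step.
Q, _ = np.linalg.qr(V.T)                # Q: (d, 3), orthonormal columns
x_abl = x - Q @ (Q.T @ x)               # ablated activation
```

If this is indeed the mechanism, a precise statement of which components are suppressed (all with harmfulness correlation above 0.7, or a fixed count) would fully resolve question 7.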
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough and insightful feedback. We will clarify all your questions in the revised paper. We will start by answering your questions and then address the remaining issues. > **Q1:** Does the term "safety residual space" refer to the affine map, linear $W$ or $W-I$? According to Definition 3.1, **the safety residual space $V_T$ was defined as an affine map**. Separately, we used $V^{:k}_T$ to represent the first k right singular vectors of $W - I$. We recognize that overloading $V_T$ caused confusion. Following your feedback, we will: - Define the safety residual space map with a distinct symbol, changing $V_T(X)$ to $S(X)$. - Consistently use $V$ only for the singular vectors of $W-I$. > **Q2:** How was the probe vector in Section 4 computed? We applied the exact method described in Arditi et al. (2024) to our training dataset. While the resulting probe vector has different values than in Arditi et al. (2024) due to the different dataset, it carries the same meaning. > **Q3:** I couldn't find a clear statement of what dataset was used for learning the safety residual space map and evaluation. Thank you for pointing this out. As stated in L124 left, **the *same* training dataset (2600 samples, detailed in Figure 7) was used for both finetuning the model and learning the map**: *"We compute activations from the training data as an approximation of the unaligned distribution $X_u$"*. Specifically, this training set was used to learn the residual space map (Figure 2). All prediction and intervention evaluations (Figures 3-6) were performed on the test set. We will state this more prominently. > **Q4:** In Section 4, what exactly is the Best-of-N BASE baseline and how is it computed? Apologies for the lack of clarity. As referenced in L195 right ("components directly extracted from the trained models’ activations..."), we follow the process described in L126 left, using SVD to extract components. 
**We perform SVD on the stacked base model activations from the training data and select the singular vector yielding the highest refusal prediction accuracy on the test set**. We will add this explicit definition. > **Q5:** What does relevance compute in PLRP? PLRP computes the relevance of each context token for a direction vector (e.g., L14-C1) having a high dot product onto the activation at the *last token position* in the specified layer (e.g., Layer 14). The relevances are back-propagated from this single position to all context tokens. We described PLRP in Section 5, but acknowledging that LRP methods might be unfamiliar, we will add a more detailed algorithm description and background in the Appendix, as suggested. > **Q6:** Does the term "harmfulness correlation" mean ... Yes, that is correct. We will revise the text to use this clearer phrasing. > **Q7:** Section 6, how many non-dominant components are suppressed? This is a good question. We will add the following figure in the revision to clarify. Please refer to the caption. **[[View figure of harmfulness correlation]](https://shorturl.at/ncMfk)** **Other Issues** > 1. No evaluation criteria are provided for accuracy of the safety residual space map You raise a good point. We will report these evaluation results in the revision. Due to length limit, please see our rebuttal to Reviewer 6Hnk **Q1**. > 2. The activation pattern is inferred by looking at relevance scores _on a single input example. We will add an aggregated evaluation to strengthen our claims. Please see rebuttal to Reviewer 6Hnk **Q6**. > 3. In Figure 6, the paper never says whether the plotted values are medians or means and ... The plotted values are **means** across the test dataset. We will clarify this in the caption and add the following figure showing distributions to the Appendix. **[[View figure of projection distribution in Figure 6]](https://shorturl.at/4HzSc)** > 4. 
The proof of Theorem 3.2 as given in Appendix A is incorrect Your analysis is insightful. We agree that the convergence assumption does not preclude zero-utility directions, especially considering mode connectivity. **We will correct the theorem's statement and proof**. Specifically, we will change the quantification from "For any residual direction $v \in V_T$ ..." to "There exist residual directions $v \in V_T$..." and update the proof accordingly. > 5. Essential References Not Discussed Thank you for highlighting these references. We will incorporate citations and discussion into the Related Work section, and we will **definitely** perform a careful pass to ensure all technical terms are clearly defined upfront and used consistently. Nevertheless, we are grateful the reviewer found our idea novel and supported by solid experiments. We hope these clarifications and proposed revisions address your concerns, and we would appreciate it if you would reconsider your evaluation based on our response. --- Rebuttal Comment 1.1: Comment: Thank you for taking the time to address my concerns. The clarifications that you provided in your rebuttal definitely help my understanding of the paper (and hopefully will help other readers as well), and the additional results very much strengthen your argument. (The high interpretability of the top aggregated LRP tokens in particular was very exciting to see.) My main remaining sticking point with the paper is Theorem 3.2. I believe that the paper would be improved by omitting it entirely, since in its corrected form, the result is essentially trivial. If anything, it only distracts from the interesting empirical results that you have found. Given your rebuttal, I am changing my recommendation from Reject to Weak Accept. Removing Theorem 3.2 would cause me to change my recommendation to Accept. 
--- Reply to Comment 1.1.1: Comment: We sincerely thank you again for the thoughtful and constructive suggestions, which have significantly contributed to the paper. We understand and agree with your suggestions that the updated Theorem 3.2 has limited practical implications for the experiment and results. In this paper, our theorem mainly provided insight and motivation for studying components in the residual space as feature directions, bridging the two concepts. **Following your suggestion, we will revise and replace Theorem 3.2 with a more direct and qualitative hypothesis**. Specifically, we will update Section 3.1 as follows: (Note $V_\mathcal{T}$ follows the revised definition, singular vectors of $W-I$)

> A key question is whether the components in the residual space contain interpretable features, similar to probe vectors. Conceptually, safety finetuning optimizes the model to produce safer outputs. This process induces activations to shift along specific directions to align with safety objectives, which we capture with $V_\mathcal{T}$. These directions in $V_\mathcal{T}$ are strong candidates for feature directions under the definition in Equation 1, as they increase the probability of safe outputs when activations are moved along them. While this does not guarantee human-interpretable features, it suggests $V_\mathcal{T}$ is a promising source for automatically discovering safety-related feature directions without requiring probing data pairs. To generalize this idea, we have the following hypothesis:
>
> **Hypothesis 1. (Finetuning Residuals as Feature Directions)** The principal components representing the activation shifts induced by safety finetuning contain safety-related feature directions. Furthermore, orthogonal directions within this space potentially represent distinct and interpretable safety features.
>
> In the following sections, we verify this hypothesis by examining the top components of $V_\mathcal{T}$.
> We study (1) if the components in $V_\mathcal{T}$ are feature directions and (2) what specific features these directions represent.

We will also update the introduction and remove the proof in the appendix accordingly. In this way, **we can preserve the theoretical insight while focusing more on the empirical evidence**, which we believe will better highlight our findings and contributions.
Summary: This paper investigates the multi-dimensional nature of safety-aligned behaviors in LLMs, challenging the traditional single-direction representation of safety features. The authors introduce the concept of a safety residual space, analyzing activation shifts during safety fine-tuning of Llama 3 8B. Through singular value decomposition and layer-wise relevance propagation, they identify a dominant refusal direction alongside smaller, interpretable directions related to role-playing and hypothetical narratives. The study further explores how these secondary directions influence refusal behavior and demonstrates that removing trigger tokens from harmful prompts can weaken safety alignment, highlighting potential vulnerabilities. Claims And Evidence: Overall, the paper presents strong and convincing evidence for its claims. The findings are well-supported by both theoretical derivations and empirical experiments, though some assumptions (e.g., the full linearity of safety features) may require further scrutiny. Expanding validation beyond Llama 3 and incorporating potential countermeasures against jailbreak attacks would further enhance the robustness of the study. Methods And Evaluation Criteria: The chosen methods and evaluation criteria are appropriate for the research problem, providing valuable insights into multi-dimensional safety representation in LLMs. However, further validation on larger models and real-world safety scenarios would strengthen the generalizability of the findings. Theoretical Claims: The paper provides a formal proof of Theorem 3.2, establishing that a converged model does not retain zero-utility directions in the learned residual shift space. The theoretical claims appear logically valid and are backed by rigorous derivations. Experimental Designs Or Analyses: The evaluation of $V_\mathcal{T}$ is well-structured, systematically examining whether all retained directions contribute positively to the training objective.
However, it would be beneficial to include an ablation study that explicitly removes certain directions and observes the impact on model performance, further reinforcing the claim that all retained directions are necessary. Regarding dataset construction, the inclusion of multiple jailbreak methods ensures a diverse set of adversarial examples, making the evaluation more comprehensive. However, further details on data balancing and sampling strategies could be provided to ensure fairness in evaluating different attack methods. Overall, the experimental design is well-motivated and provides strong empirical support for the theoretical claims. Small refinements, such as broader model comparisons, could further enhance the robustness of the conclusions. Supplementary Material: Yes, I have read all parts of the supplementary materials. Relation To Broader Scientific Literature: This paper makes a meaningful contribution by extending prior work in LLM security and robustness while introducing novel insights into the structure of residual shifts and their role in adversarial mitigation. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. Originality: The integration of safety fine-tuning and residual shift analysis to address jailbreak attacks is a novel contribution. The method of iterative trigger removal is innovative and adds value to the field. 2. Practical Relevance: The method directly addresses the growing concern of adversarial robustness in large language models, making it highly relevant for both academic and industry applications. Weaknesses: 1. Limited Evaluation: The empirical evaluation is not extensive enough. More detailed experiments and comparisons with existing methods are needed to strengthen the paper's claims. 2. Scalability: The iterative trigger removal process may be computationally expensive. A discussion of scalability and potential optimizations would improve the paper. 3. 
Notation Clarity: Some mathematical terms and notations, such as zero-utility directions and residual shifts, could be clarified for better accessibility. Other Comments Or Suggestions: refer to the weakness Questions For Authors: refer to the weakness Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. We appreciate that the reviewer found our core claims well-supported and our analysis novel. We will address each of your concerns below. >**Q1:** Limited Evaluation: The empirical evaluation is not extensive enough. Thank you for these valuable suggestions. We will address each point individually. **Evaluation on larger models (Methods And Evaluation Criteria).** Our evaluation focused on Llama 3.1 8B as a case study for mechanistic interpretation. We agree that evaluation on models of different scales would further strengthen our claims. To this end, we now include a safety residual space analysis for **Llama 3.2 3B** to verify that our findings generalize across scales. As shown in the figure below, the residual space for Llama 3.2 3B shows similar low-rank linearity and interpretability to Llama 3.1 8B, which clearly supports the generalizability of our findings. **[[View figure of analyzing Llama 3.2 3B]](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/3VaIR7muY-H_EalTIQhfh.png)** We will add this additional figure to the Appendix in the revised paper and reference it in the main body. **Evaluation on real-world safety scenarios and comparisons with existing methods. (Methods And Evaluation Criteria)** Thank you for this insightful feedback. Our current evaluation covers over 6 strong jailbreak baselines, representing different real-world adversarial attack strategies. We also compared the effectiveness of our trigger removal attack against these jailbreak methods. We agree that expanding the scope of our safety residual space analysis to more scenarios would be valuable, and we plan to explore this in future work. **Evaluation on the full linearity of safety features. (Claims And Evidence)** Thank you for this insightful suggestion. Our framework does not strictly require that safety features are fully linear.
In the following table, we evaluate how well activation shifts during safety finetuning can be approximated by linear affine maps. We found that the Mean Square Error between finetuned activations and the approximation is negligible compared to the Norm of activations. This suggests **safety finetuning mostly introduces linear safety features**. **[[View table of residual space map MSE]](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/CPfaKqhmqgHIv9V9HRx8j.png)** We will report these results for our learned linear map in the revision to strengthen the claims regarding the linearity of safety features. > **Q2:** Scalability: The iterative trigger removal process may be computationally expensive. Thank you for this suggestion. While iterative trigger removal serves mainly as an analytical tool, **its computational cost (at most 30 attempts per sample in our settings) is comparable to other iterative methods** like TAP [1] (avg. 35 attempts) and PAIR [2] (avg. 37 attempts). The computational requirement is feasible in practice. We will add a comparison to these baselines in the paper, following the description of the trigger removal algorithm. > **Q3:** It would be beneficial to include an ablation study that explicitly removes certain directions. Thank you for pointing this out. We believe there may be a misunderstanding, as we did provide an ablation study removing certain directions to measure changes in model behavior (Figure 4 for refusal ability and Appendix C.4 for general ability). We will make the reference to this study clearer in the revision. > **Q4:** Further details on data balancing and sampling strategies could be provided. Thank you for your constructive suggestions. Details on our data balancing and sampling processes are provided in Appendix C.1. We will add a more direct reference to this section in the main text to make this information more visible. 
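The relative-error evaluation reported in the linearity table above (MSE of the affine approximation compared against the scale of the activations) can be computed along the following lines. This is a simplified sketch on synthetic stand-in activations, not our actual evaluation code:

```python
# Sketch: fit an affine map from base to finetuned activations and report
# the MSE relative to the mean squared activation magnitude.
import numpy as np

rng = np.random.default_rng(3)
n, d = 400, 32
X = rng.normal(size=(n, d))                                       # base activations
Y = X @ rng.normal(size=(d, d)) + 0.01 * rng.normal(size=(n, d))  # "finetuned"

A = np.hstack([X, np.ones((n, 1))])          # append bias column
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
mse = np.mean((Y - A @ coef) ** 2)
rel_err = mse / np.mean(Y ** 2)  # small value => shift is nearly linear in X
```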
> **Q5:** Notation Clarity: Some mathematical terms and notations, such as zero-utility directions and residual shifts, could be clarified for better accessibility. Thank you for pointing this out. We will clarify these terms in the revised paper to improve accessibility: - *Residual shifts* refer to the change in activations within the safety residual space before and after finetuning. - *Zero-utility directions* refer to directions in the safety residual space that do not contribute to changing the model's safety behaviors. In the revision, we will add these definitions where the terms are first introduced. [1] Mehrotra, Anay, et al. "Tree of attacks: Jailbreaking black-box llms automatically." Advances in Neural Information Processing Systems 37 (2024): 61065-61105. [2] Chao, Patrick, et al. "Jailbreaking black box large language models in twenty queries." arXiv preprint arXiv:2310.08419 (2023).
GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers
Accept (poster)
Summary: This paper proposes a novel Kolmogorov-Arnold network (KAN) based spectral filter for the graph Transformer framework, enhancing the flexibility to perform low/high/band-pass filtering. Compared to previous polynomial spectral graph neural networks (GNNs), the proposed graph KAN spectral filter has orthogonal polynomial bases, better convergence, and global graph modeling. Compared to the non-polynomial spectral GNN, Specformer, the proposed spectral filter can consider higher orders of the eigenvalues in an explicit manner, leading to a more flexible construction. The authors also show empirically that adding a linear-attention module to the spectral filters can lead to better empirical performance.

## update after rebuttal
I have updated the scores according to the rebuttal of the authors. For details, please see the rebuttal comments.

Claims And Evidence:
1. The authors claim that Specformer cannot consider the spectrum at different orders while the proposed method can.
   - This is not well justified, since in Specformer there is an MLP applied to each eigenvalue $\lambda_i$ which can theoretically learn to output $\lambda_i^k$ for $k>0$, given the universality of MLPs. Therefore, this claim is problematic.
2. The authors claim in the abstract that self-attention preserves only low-frequency signals in graph features.
   - This is not well supported. It is true that each attention map is equivalent to a low-pass filter. However, in multi-head attention, a linear combination of different low-pass filters and all-pass filters (i.e., residual connections) might generate band/high-pass filters.
3. The authors claim to propose a novel graph Transformer, which is more flexible and expressive.
   - First, the proposed network is more like a spectral GNN plus a linear-attention module. The "transformer" component is not well justified. (The differences with/without attention in the ablation studies are not even statistically significant under a t-test.)
   - There is no theoretical or empirical support that the proposed model is more expressive than recent graph Transformers such as GraphGPS (Rampasek et al., 2022), GD-Graphormer (Zhang et al., 2023), and GRIT (Ma et al., 2023).
   - In the baselines, there are some graph Transformers known for better scalability but not for top expressivity. The comparison to them cannot support the claim of stronger expressivity.
   - A stronger ability to generate filters of different frequencies does not necessarily lead to stronger expressivity in distinguishing different graph structures. Wang et al. (2022) show that polynomial spectral GNNs are not more expressive than 1-WL. CKGConv (Ma et al., 2024) shows that applying a non-linear transformation on the polynomial bases can lead to expressivity beyond 1-WL. There is no similar discussion in the paper to support the statement of "more expressive".

--------------
> only the references not included in the paper are listed
- Zhang, Bohang, et al. "Rethinking the Expressive Power of GNNs via Graph Biconnectivity." The Eleventh International Conference on Learning Representations.
- Ma, Liheng, et al. "Graph Inductive Biases in Transformers without Message Passing." International Conference on Machine Learning. PMLR, 2023.
- Wang, Xiyuan, and Muhan Zhang. "How Powerful Are Spectral Graph Neural Networks." International Conference on Machine Learning. PMLR, 2022.
- Ma, Liheng, et al. "CKGConv: General Graph Convolution with Continuous Kernels." International Conference on Machine Learning. PMLR, 2024.

Methods And Evaluation Criteria:
1. The datasets utilized are known for their lack of training examples, which makes them less suitable for evaluating the capacity of complicated models such as graph Transformers. Overfitting issues might dominate the performance improvements. These datasets can justify that the proposed method introduces a better inductive bias.
However, these datasets alone cannot well support the capacity claims of the proposed GrokFormer. 2. In the literature on graph Transformers, the datasets from BenchmarkingGNNs (Dwivedi et al., 2023) and the Long Range Graph Benchmark (Dwivedi et al., 2022) are more widely used to evaluate the capacity and expressivity of graph models. (They have been shown to be more effective than the TU datasets for evaluating the ability to model graph structure.) However, the paper does not use those datasets, which weakens the experimental conclusions. 3. The paper claims that the proposed spectral filter brings stronger capacity to graph Transformers. However, the compared graph Transformers are better known for scalability than for expressivity. Most existing expressive graph Transformers are ignored in the comparison, including but not limited to GraphGPS (Rampasek et al., 2022), GD-Graphormer (Zhang et al., 2023), and GRIT (Ma et al., 2023) (listed in the previous section). Especially for GraphGPS and GRIT: the former is also a hybrid Transformer architecture (MPNN + self-attention), and the latter reaches SOTA performance on various datasets with a pure Transformer architecture. 4. The filter-fitting experiments do not include graph Transformers in the comparisons. Theoretical Claims: 1. Listed in (1) of **Claims And Evidence**. 2. The complexity claim has some issues. - When considering the full spectrum ($N$ eigenvalues/eigenvectors), the forward pass of the proposed method is actually $O(N^3)$ ($O(N^2 M)$ with $M=N$) in Eq. (8), which is missing from the calculation of the overall forward complexity. 3. In Table 1, there are two properties, 'order-adaptive' and 'spectrum-adaptive'. These properties are actually vague. They are presented as positive features of the work without clear definition or support.
- In polynomial spectral GNNs, *order-adaptive* refers to the polynomial approximation of the full spectrum, which is an approximation from the spatial domain. This is a trade-off between flexibility and computational cost.
- In Specformer, *spectrum-adaptive* refers to the use of a subset of the spectrum (eigenvalues), which is an approximation from the spectral domain.
- In the graph KAN spectral filter, *order-adaptive* refers to the use of the $k$-th powers of the eigenvalues. However, this is different from *order-adaptive* in polynomial spectral GNNs and leads neither to better computational cost nor to a tighter approximation.
- In fact, it is arguable that the proposed KAN-based spectral filter can be well approximated by the Transformer-based spectral filter in Specformer due to universal approximation. (The performance might differ in practical training, however.)

Experimental Designs Or Analyses: 1. The ablation study cannot well justify the necessity of the efficient self-attention. - The performance difference between the graph Fourier KAN and the full model is not statistically significant, indicating that the efficient self-attention is not necessary. - Consider using other datasets that better distinguish model capacity. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: 1. The proposed method does not provide a key contribution to the literature on graph Transformers or Transformers, which does not match the main statement of the paper. 2. The key contribution of the proposed method is a novel parameterization of graph spectral filters and spectral GNNs, which endows them with several desirable properties. Essential References Not Discussed: No Other Strengths And Weaknesses: **Weaknesses** 1. The proposal of the graph Fourier KAN spectral filters is novel and interesting.
However, the injection of efficient self-attention is not well supported, which in fact weakens the paper. A simple A+B combination of techniques without good motivation and theoretical/empirical support leads to a negative view. 2. There is no discussion of the expressivity of the proposed method. Other Comments Or Suggestions: 1. In graph theory, the term ``graph order'' is widely used and stands for the number of nodes in a graph. This work uses the term mistakenly. 2. Arguably, Specformer (Bo et al., 2023) should be considered a spectral GNN with a Transformer/self-attention encoder rather than a graph Transformer, which is supposed to learn from graph-structured data directly with a Transformer architecture (self-attention + FFN). Questions For Authors: 1. The method seems to focus on the capacity and expressivity of the model. Why not use full self-attention? What is the main goal of using linear attention, which is theoretically weaker than regular self-attention? 2. How necessary are the attention modules in this work? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on the filter design. Please see our responses to your comments one by one below.

>**Question #1** In Specformer, an MLP is applied to each eigenvalue $\lambda$, which can theoretically learn to output $\lambda^k$.

We appreciate your insightful comment. While applying an MLP to $\lambda$ in Specformer can theoretically help capture frequency information about $\lambda^k$, it does not necessarily enable the MLP in Specformer to effectively model spectral information across arbitrary filter orders $k$, due to its limited parameters and the lack of explicit learning for unknown $\lambda^k$. In contrast, our GrokFormer explicitly captures a diverse range of frequencies across different filter orders, ensuring a more expressive representation in the spectral domain.

>**Question #2** In multi-head attention, the residual connection might generate band-/high-pass filters.

We really appreciate your comment. We'd like to clarify that [Ref1] has shown that *although multi-head, FFN, and skip connection all help preserve the high-frequency signals, none would change the fact that MSA block as a whole only possesses the representational power of low-pass filters* (see page 5 of [Ref1]). This theory is further supported by the experiments on the ZINC dataset in [Ref2]. These findings confirm that the Transformer is essentially a low-pass filter, motivating our development of expressive spectral filters for graph Transformers.

- [Ref1] Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice. ICLR, 2022.
- [Ref2] How Expressive are Transformers in Spectral Domain for Graphs? TMLR, 2022.

>**Question #3** How necessary are the attention modules in the work? (Lack of statistical analysis via t-tests.)

In Table A1, we include a new dataset, Penn94, in the ablation study and report the $p$-values from t-tests on all datasets.

```
Table A1.
Additional ablation study on Penn94.
```

| |Penn94|$p$-value|
| --- | --- | --- |
|Self-attention-E|76.29|0.001|
|Graph Fourier KAN|81.36|0.002|
|GrokFormer|83.59|-|

Self-attention-E can generally capture global spatial relationships, preserving the similarity of node features. However, on the large-scale dataset Penn94 with a low homophily level, relying solely on the spectral information from Graph Fourier KAN may not be sufficient. Only when Graph Fourier KAN and self-attention are integrated can GrokFormer achieve consistently improved performance, with statistically significant differences ($p$ < 0.05) over the ablated GrokFormer models. This demonstrates that our GrokFormer is more than a simple combination of self-attention and Graph Fourier KAN.

>**Question #4** The theoretical discussion of strong expressivity in terms of the WL test.

Please see our detailed response to Reviewer axD1's Weakness #3.

>**Question #5** and **Question #6** Some expressive graph Transformers like GraphGPS and GRIT are overlooked in the comparisons.

Please see our detailed responses to Reviewer axD1's Weakness #1 and #2.

>**Question #7** The filter-fitting experiments do not include graph Transformers in the comparisons.

Both PolyFormer and Specformer design filter functions $h(\lambda)$ in the spectral domain, so they can be regarded as spectral graph Transformers based on a Transformer architecture. Besides Specformer, we added PolyFormer to the filter-fitting experiments to further verify the learning capability of our filter. Please refer to Table A3 in our response to Reviewer axD1's Weakness #4.

>**Question #8** 'order-adaptive' and 'spectrum-adaptive' are vague. The term 'graph order' is used mistakenly.

We would like to clarify that 'order-adaptive' in this paper actually means 'filter order-adaptive' in the spectral domain, rather than the spatial-domain order or 'graph order'.
It refers to using the $k$-th powers of the eigenvalues to approximate filters, analogous to polynomial filter approximation in the spectral domain. 'Spectrum-adaptive' indicates that the full spectrum (or a subset of the eigenvalues) is used to approximate spectral filters, similar to the capability of the filter in Specformer. We will give a clear description to avoid this misunderstanding in our final version.

>**Question #9** Complexity claim

The overall forward complexity of GrokFormer is $O(N(Nq+d^2)+KqM)$, where $q=N$ on small-scale datasets. It is lower than that of related methods, e.g., Specformer with a complexity of $O(N^2(q+d)+Nd^2)$.

>**Question #10** Why not use full (regular) self-attention?

We replaced linear self-attention with regular attention (G-Reg) and conducted experiments, which show that both achieve comparable performance (Table A2), as they both capture non-local spatial features. We chose linear attention for its better efficiency.

```
Table A2. Accuracy comparison by attention type.
```

| | Cora |Squirrel |Actor|
| --- | --- | --- | --- |
|G-Reg|89.43| 65.32|42.75|
|GrokFormer |89.57| 65.12|42.98|

---

Rebuttal Comment 1.1: Comment: Thanks for the response.

> Q1: (+)

I agree that GrokFormer has explicit control over the filter orders. However, this indicates that the filter orders are pre-determined by humans rather than **adaptively learned** from data. In contrast, Specformer is the one trying to learn filter orders from data, even though it might not successfully do so. Since you show that the Graph Fourier KAN (GFKAN) reaches better empirical performance, I think this point is fine, except that you might need to rethink the choice of the word "adaptive", because "adaptive" usually means "automatically learned from data" rather than "pre-determined as a hyperparameter".

> Q2: Good

> Q3: Good

> Q4: "the spectral graph filter designed in GrokFormer is not based on the spatial-domain WL test.
The GrokFormer is a spectral GT operating in the spectral domain while 1-WL or GD-WL algorithms are essentially spatial methods based on the graph structure"

I disagree with this point. The concept of expressivity is not specific to the spatial domain. It is a fundamental evaluation of the ability of graph models to distinguish different graph structures. However, I agree that it might be difficult to compare directly against the WL families. I still encourage the authors to figure this out. There are two potential routes to demonstrate expressivity: 1. (Theoretically) Wang \& Zhang (2022) provide a theoretical analysis of polynomial-based spectral GNNs' expressivity compared to 1-WL, which might provide some hints. 2. (Empirically) Wang \& Zhang (2024) provide a benchmark to empirically evaluate the expressivity of graph models (ranging from 1-WL to 4-WL).

- Wang, Xiyuan, and Muhan Zhang. "How Powerful are Spectral Graph Neural Networks." ICML 2022.
- Wang, Yanbo, and Muhan Zhang. "An Empirical Study of Realized GNN Expressiveness." ICML 2024.

> Q5 and Q6

#### First, could you also provide the training accuracy for the experiments?

WikiCS, Pubmed, Squirrel, and Actor are single-graph datasets. The **small-scale nature of these datasets (in terms of the number of examples)** might be **insufficient to train graph Transformers**. Given that GRIT and GPS have much better performance on Peptides-func, it is reasonable to question whether the worse performance is due to worse capacity or to overfitting. (Note that overfitting is very specific to the dataset rather than to the model itself. It usually hints at stronger model capacity.)

#### Second, considering the scale of the datasets utilized, please compare with other graph Transformers on ZINC and SP-CIFAR from [Ref1] to fully support your claim

I ask for this since you propose your method as a graph Transformer.
(Graph) Transformers are widely acknowledged to have stronger capacity, at the cost of requiring a large amount of training data.

> Q7

Honestly, PolyFormer and SpecFormer are arguably not regular graph Transformers (which use regular attention mechanisms to process graphs directly). PolyFormer computes the `attention` using $\text{tanh}(\mathbf{Q}\mathbf{K}^\intercal) \odot \mathbf{B}$ instead of a softmax. SpecFormer is *de facto* a spectral GNN with a Transformer encoder for the eigenvalues, not a graph Transformer. However, this is fine at this stage; it would be better for you to clearly distinguish the differences among them.

> Q8: Good

> Q9: Good

> Q10: Good

> Final

Regarding your response, I will raise the score to 2 temporarily. If you can address Q5 and Q6, I will raise the score to 3. Q4 might be difficult to address for now, but I encourage you to do so in the final version.

---

Reply to Comment 1.1.1: Comment: Thank you very much for raising the score. We greatly appreciate your further comments, and it is great to know that our response has addressed most of your questions. Please find our responses to your remaining concerns below.

>**Follow-up to Question #1** About the adaptivity of the 'filter orders'.

Thank you very much for pointing this out and for the valuable suggestion. We would like to clarify that 'adaptivity' in the paper is meant to stress the adaptive combination of filters of varying orders. This adaptivity is achieved through the learnable parameters {$\alpha_k$}, which enable an adaptive synergy of the $K$ filter bases {$b_k(\lambda)$}, yielding the order-adaptive filter $h(\lambda) = \sum_{k=1}^{K}\alpha_k b_k(\lambda)$, where $k$ is the order of a filter.

>**Follow-up to Question #4** The concept of expressivity is not specific to the spatial domain.

We greatly appreciate the comments and the related literature you provided.
We agree that expressivity is not limited to the spatial domain but is a fundamental measure of a graph model's capability. In this study, we have provided a spectral analysis showing that our designed filter possesses universality and flexibility, and we will explore the two directions you suggested for a more in-depth discussion of the expressivity concept and the comparison with the WL family in our final version.

>**Follow-up to Question #5** The training accuracy on the experiments?

Following your suggestion, we report the accuracy of each method on the training set in Table A4. As the results show, both GraphGPS and GRIT achieve very high training accuracy on these small-scale node classification datasets, but their performance on the test sets is inconsistent with the training sets. This can be attributed to the fact that these graph Transformer methods, dedicated to achieving higher expressive power, are prone to overfitting on these small datasets, leading to poor generalization on the test set. In contrast, GrokFormer focuses on designing an expressive spectral graph filter to capture diverse signals beyond the low-frequency components of attention mechanisms, achieving better generalization on these datasets.

```
Table A4. The training accuracy on four small-scale datasets.
```

| | WikiCS|Pubmed|Squirrel|Actor|
|--|--|--|--|--|
|GraphGPS|0.9848|1.0|0.9405|0.9512|
|GRIT|0.9814|1.0|0.9532|0.9781|
|GrokFormer|0.9631|0.9502|0.9839|0.6275|

>**Follow-up to Question #6** Compare with other graph Transformers on ZINC and SP-CIFAR from [Ref1] to fully support your claim.

Following your suggestion, to fully explore the impact of dataset scale and model capacity, we further conducted experiments on two additional datasets, ZINC and SP-CIFAR, with the results shown in Table A5.

```
Table A5. Comparison on two additional datasets (MAE on ZINC, classification accuracy on SP-CIFAR).
```

| | ZINC |SP-CIFAR |
| --- | --- | --- |
| Graphormer |0.122|-|
| GraphGPS |0.070| 72.31 |
| GRIT |0.060| 75.67 |
| GrokFormer |0.076| 73.75 |

From the results, we observe that GrokFormer achieves better performance than Graphormer and performs comparably to GraphGPS. GrokFormer slightly underperforms GRIT on ZINC, while the performance margin on SP-CIFAR is relatively large. Considering all the empirical results, including those on the suggested datasets and on the small-scale datasets, it is clear that no method consistently achieves the best performance across all datasets. GRIT is designed to improve GTs' expressiveness on large datasets by incorporating graph inductive biases, so it achieves more favorable performance on ZINC and SP-CIFAR but suboptimal performance on small-scale datasets due to overfitting. GraphGPS achieves better performance on ZINC but shows suboptimal results on SP-CIFAR and the small-scale datasets compared to GrokFormer. As for GrokFormer, we aim to enhance the GT's capability to capture various frequency information on graphs by designing an expressive filter, which offers a relatively better balance between generalization ability and expressiveness, as demonstrated by the empirical results above. We hope this clarification helps address your concerns.

>**Follow-up to Question #7** PolyFormer and SpecFormer are arguably not regular graph Transformers.

We really appreciate your insight into this problem. We will clarify this point in the final version of the paper. Your comments have been tremendously helpful in enhancing our work. Thank you very much again for your great efforts and time on our paper.
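For concreteness, the order-adaptive spectral filtering discussed in this thread, $h(\lambda) = \sum_{k=1}^{K}\alpha_k b_k(\lambda)$ applied as a spectral convolution $U h(\Lambda) U^\intercal x$, can be sketched in a few lines of NumPy. Note this is an illustrative sketch only: the Fourier-series basis $b_k$ below is a simplified stand-in, not the paper's exact parameterization.

```python
import numpy as np

def order_adaptive_filter(eigvals, alphas, fourier_coeffs):
    """Evaluate h(lambda) = sum_k alpha_k * b_k(lambda), where each basis
    b_k is a short Fourier series in lambda^k (a simplified stand-in)."""
    K, M = fourier_coeffs.shape[0], fourier_coeffs.shape[1] // 2
    h = np.zeros_like(eigvals)
    for k in range(1, K + 1):
        lam_k = eigvals ** k  # k-th power of the spectrum
        a, b = fourier_coeffs[k - 1, :M], fourier_coeffs[k - 1, M:]
        b_k = sum(a[m] * np.cos((m + 1) * np.pi * lam_k)
                  + b[m] * np.sin((m + 1) * np.pi * lam_k) for m in range(M))
        h += alphas[k - 1] * b_k
    return h

# Toy graph: a path on 4 nodes; the normalized Laplacian spectrum lies in [0, 2]
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
d = A.sum(1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))
lam, U = np.linalg.eigh(L)

rng = np.random.default_rng(0)
h = order_adaptive_filter(lam, rng.normal(size=3), rng.normal(size=(3, 8)))
x = rng.normal(size=4)
y = U @ (h * (U.T @ x))  # spectral convolution U h(Lambda) U^T x
print(y.shape)  # (4,)
```

With $K=3$ bases of $M=4$ Fourier terms each, the learnable parameters are just the $\alpha_k$ and the Fourier coefficients: the filter's frequency response is adaptive, while the orders $k$ themselves stay fixed, which is the distinction the reviewer raised in Q1.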
Summary: This paper proposes GrokFormer, a Transformer-based graph spectral model that introduces an expressive graph filter into the Transformer architecture, effectively capturing a wide range of frequency signals in an order- and spectrum-adaptive manner. Experiments on synthetic and real-world datasets show the effectiveness of the proposed method. ## update after rebuttal The authors' responses have addressed my concerns. Hence, I'd like to keep my original score. Claims And Evidence: Yes. The authors conduct extensive experiments on synthetic and real-world datasets, and the results demonstrate that GrokFormer can learn an expressive filter and outperform baselines on node-level and graph-level tasks. Methods And Evaluation Criteria: Yes. The evaluation criteria in this paper follow the same settings as previous methods. Theoretical Claims: The authors provide theoretical analysis and empirical evidence showing the improved flexibility and expressivity of the GrokFormer filter over Specformer and polynomial filters, which seems sound to me. Experimental Designs Or Analyses: I think the designs of the experiments are reasonable. Supplementary Material: There is no supplementary material in this paper. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The motivation is reasonable and the idea is easy to follow. 2. The paper develops a new graph Transformer that introduces a novel graph spectral filter learning approach into the Transformer architecture, effectively capturing a wide range of frequency signals in an order- and spectrum-adaptive manner. 3. The comprehensive experiments are convincing in my opinion. 4. The paper is clearly organized and well-written. Weaknesses: 1. The eigenvalue decomposition brings significant computational costs compared to the polynomial paradigm.
The authors are encouraged to include the preprocessing time in Table 8 and to demonstrate whether the proposed method achieves a good trade-off between computational cost and expressiveness. 2. Clarification from the authors about guidelines for selecting the order K could be helpful. 3. The authors are encouraged to include a deeper discussion of GrokFormer, PolyFormer, and Specformer, as they are all spectral graph Transformer methods. Other Comments Or Suggestions: N/A Questions For Authors: 1. In Figures 4 and 7, as the order K increases, the performance on some datasets appears to change only marginally. Clarification from the authors about guidelines for selecting the order K could be helpful. 2. How should we understand the benefits of the higher-order filter bases in Eq. (6)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on the methodology and experiment designs. Please see our responses to your comments one by one below.

>**Weakness #1** The preprocessing time should be included in Table 8; does the proposed method achieve a trade-off between computational cost and expressiveness?

Thanks for your valuable suggestion. Following it, we measured the computational overhead (including preprocessing) on the small-scale dataset Squirrel and the large-scale dataset Penn94, with the results summarized in Table A1 below.

```
Table A1. The computational cost in terms of GPU memory required (MB), training runtime (s), and pre-processing time (s) on two datasets.
```

| Dataset|Methods |Memory (MB)|Forward Time (s)| Decomposition Time (s)|
| --- | --- | --- | --- |---|
|Squirrel|PolyFormer| 8678|42.38|/|
||Specformer| 1951|6.45|3.26|
||GrokFormer|1424|6.12|3.26|
|Penn94|PolyFormer| 14113|121.78|33.14|
||Specformer| 5053|9.39|746.32|
||GrokFormer|4647|8.13|746.32|

It is clear that our GrokFormer and Specformer are comparably fast, and both require much less computational cost than PolyFormer. Additionally, the node classification accuracy on Squirrel shows that spectral decomposition methods such as GrokFormer and Specformer achieve approximately a 20\% improvement over the polynomial filtering model PolyFormer. On the large-scale graph dataset Penn94, while preprocessing takes some time, the forward-pass complexity can be reduced using truncated spectral decomposition. Moreover, since the spectral decomposition is computed only once and reused throughout training, its cost becomes minor compared to the cumulative forward-pass cost over many training iterations.

>**Weakness #2 and Question #1** Clarification from the authors about some guidelines for selecting the order $K$ could be helpful.
Please see our detailed response to ***Reviewer Cg7z's Suggestion #1 and Question #3***, where we clarify the concern regarding the selection of the filter order $K$.

>**Weakness #3** A deeper discussion of GrokFormer, PolyFormer, and Specformer should be included.

Specformer designs a learnable filter by performing self-attention over the eigenvalues (spectrum), making it a spectrum-adaptive spectral method. PolyFormer designs an adaptive filter by performing self-attention over $K$ bases with a fixed filter order. Different from these two methods, GrokFormer meticulously designs a universal, order- and spectrum-adaptive filter within a Transformer architecture to capture diverse frequency components in graphs. We will add this comparison to the related work section.

>**Question #2** How should we understand the benefits of the higher-order filter bases?

Higher-order filter bases provide two benefits: (1) they enrich the spectral representation with additional frequency components, and (2) they strengthen the approximation power. These advantages facilitate more accurate modeling of signal distributions in diverse graphs of varying complexity. Our method leverages these advantages through an adaptive order selection mechanism that dynamically optimizes filter orders for datasets with varying properties, achieving large performance improvements over existing SOTA filters, e.g., a 21\% improvement over the baseline PolyFormer with fixed-order filter bases on the Squirrel dataset.

---

Rebuttal Comment 1.1: Comment: Thank you for your responses. After reading the comments of the other reviewers as well as the corresponding responses, I'd like to keep my score. I hope you can add the above results and discussions to the revised version.

---

Reply to Comment 1.1.1: Comment: We truly appreciate your timely and valuable feedback on our paper. Your thoughtful comments are extremely helpful for enhancing our work.
We will carefully prepare the final version of our paper based on the experimental results and discussions provided in the rebuttal.
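The approximation-power benefit of higher-order bases discussed in this thread can be illustrated with a small self-contained least-squares experiment. This sketch uses plain monomial bases $\lambda^k$ rather than the paper's Fourier-based bases, so it only demonstrates the general principle: fitting a fixed target frequency response $h(\lambda) = \sum_{k=1}^{K}\alpha_k \lambda^k$, the fitting error shrinks as the maximum order $K$ grows.

```python
import numpy as np

# Target: a band-rejection response over the normalized-Laplacian spectrum [0, 2]
lam = np.linspace(0, 2, 200)
target = 1.0 - np.exp(-10.0 * (lam - 1.0) ** 2)

errors = {}
for K in (2, 4, 8):
    # Basis matrix B[:, k-1] = lam**k for k = 1..K; fit alpha_k by least squares
    B = np.stack([lam ** k for k in range(1, K + 1)], axis=1)
    alpha, *_ = np.linalg.lstsq(B, target, rcond=None)
    errors[K] = np.mean((B @ alpha - target) ** 2)

print(errors)  # mean-squared fitting error shrinks as the maximum order K grows
```

Because the bases are nested, the error is non-increasing in $K$; how much each additional order helps depends on the target response, which is consistent with the rebuttal's observation that the best $K$ varies with dataset properties.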
Summary: The paper introduces GrokFormer, a novel Graph Transformer (GT) model that addresses limitations in existing graph learning methods, particularly in capturing diverse frequency signals in graph data. GrokFormer incorporates a Graph Fourier Kolmogorov-Arnold Network (KAN) to design spectral filters that are both spectrum- and order-adaptive. Unlike prior models with fixed or limited adaptability, GrokFormer uses Fourier series-based learnable activation functions to flexibly model graph Laplacian spectra across multiple orders. The model demonstrates superior expressiveness and adaptability compared to state-of-the-art (SOTA) Graph Neural Networks (GNNs) and GTs. Extensive experiments on node classification and graph classification tasks across various datasets validate its effectiveness, showing consistent improvements over baselines. Claims And Evidence: The paper makes several claims: Expressiveness of GrokFormer Filters: The authors claim that GrokFormer filters are more expressive than existing spectral methods due to their spectrum- and order-adaptive capabilities. This is supported by theoretical proofs (e.g., Propositions 4.1–4.4) and empirical results on synthetic datasets, where GrokFormer outperforms others in fitting complex filters like comb and low-comb patterns. Superiority in Real-World Tasks: GrokFormer achieves state-of-the-art performance on 10 node classification datasets and 5 graph classification datasets, outperforming both GNNs and GTs. For example, it achieves the highest accuracy on heterophilic datasets like Squirrel, where complex frequency responses are crucial. Efficiency: The model is computationally efficient compared to alternatives like Specformer, as demonstrated by empirical time and memory complexity analyses. 
Methods And Evaluation Criteria: The methods proposed align well with the problem of capturing diverse frequency signals in graphs: The use of Fourier series for filter modeling ensures flexibility across spectral frequencies. The evaluation includes a wide range of datasets with varying properties (homophilic and heterophilic), ensuring robustness. Baselines include both spatial-based GNNs (e.g., GCN, GAT) and spectral-based GTs (e.g., Specformer), providing a comprehensive comparison. Theoretical Claims: The theoretical claims are supported by detailed proofs: Proposition 4.1 shows the learnability of GrokFormer filters in both polynomial order and spectrum. Proposition 4.2 demonstrates that polynomial filters used in existing methods are special cases of GrokFormer filters. Proposition 4.3 establishes that Specformer is a simplified variant of GrokFormer. Proposition 4.4 proves the universal approximation capability of GrokFormer filters using Fourier series. Experimental Designs Or Analyses: The experimental design is sound: Experiments cover diverse datasets with varying scales, homophily levels, and graph properties. Ablation studies isolate the contributions of key components (e.g., self-attention vs. Graph Fourier KAN). Scalability tests demonstrate the model's efficiency on large graphs. However, further exploration of hyperparameter sensitivity could enhance the robustness of conclusions. Supplementary Material: The supplementary material includes detailed dataset statistics, experimental setups, additional visualizations of learned filters, and proofs for theoretical claims. These materials provide valuable context for understanding the results. Relation To Broader Scientific Literature: The paper builds on prior work in GNNs (e.g., ChebyNet, GPRGNN) and GTs (e.g., Specformer). It advances the field by addressing limitations in spectrum adaptability (Specformer) and order adaptability (polynomial GNNs). 
The use of Fourier series aligns with established theories in approximation and signal processing. Essential References Not Discussed: The paper appears comprehensive in its citations but could benefit from discussing recent advancements in scalable spectral methods or alternative transformer architectures that address similar challenges. Other Strengths And Weaknesses: Strengths: Original combination of Fourier series modeling with Transformer architecture. Comprehensive theoretical analysis supporting empirical findings. Strong performance across diverse datasets. Weaknesses: Limited discussion on potential limitations or failure cases. Scalability could be further improved by avoiding spectral decomposition altogether. Other Comments Or Suggestions: Clarify how hyperparameters like K (order) affect performance across different datasets. Include runtime comparisons on larger-scale graphs beyond Penn94 for broader scalability insights. Questions For Authors: How does GrokFormer handle dynamic graphs where the topology changes over time? Can the model's efficiency be improved further by avoiding spectral decomposition entirely? How sensitive is the performance to hyperparameters like K or the number of Fourier terms (M)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on the methodology design and theoretical analysis. Please see our responses to your comments one by one below.

> **Weakness #1** Limited discussion on potential limitations or failure cases.

Thank you very much for the suggestion and question. We will provide a more comprehensive discussion of GrokFormer's potential limitations. One limitation is that learning optimal spectral filters that require the complete spectrum entails a full spectral decomposition, which may incur a huge computational cost on very large datasets; however, this limitation does not affect the effectiveness of GrokFormer on a variety of real-world graph datasets. Please also find another limitation in our response to Reviewer axD1's Weakness #1 and #2, where we discuss that our spectral graph filter may miss some spatial structure information in some datasets.

> **Weakness #2, Question #2, and Suggestion #2** Can the model's efficiency be improved further by avoiding spectral decomposition entirely? Include runtime comparisons on larger-scale graphs beyond Penn94 for broader scalability insights.

Thank you for your insightful suggestion. Avoiding spectral decomposition entirely could further enhance the model's efficiency. To reduce computational costs, we can employ truncated spectral decomposition to compute only the top-$p$ eigenvalues on large-scale graphs. As shown in Table A1, we provide runtime comparisons on the arXiv dataset, which contains 169,343 nodes (larger than Penn94). By considering only the smallest 5,000 eigenvalues, our model remains computationally efficient even on large-scale datasets.

```
Table A1. The training cost in terms of GPU memory (MB) and running time (in seconds) on arXiv.
```

| |Memory (MB)|Time (s)|
| --- | --- |---|
|PolyFormer|4019| 296.53 |
|Specformer|5565 |6.58|
|GrokFormer|5491|6.37|

> **Suggestion #1 and Question #3** Clarify how hyperparameters like $K$ (order) affect performance across different datasets. How sensitive is the performance to hyperparameters like $K$ or the number of Fourier terms ($M$)?

Thank you for the valuable comments. In our model, both the filter order $K \in [1,6]$ and the number of Fourier terms $M \in [32,64]$ empower the filter to include richer frequency components and enhance its fitting capability. From the dataset descriptions in Table 6 and the experimental results on the filter order $K$ in Figures 3 and 7, we observe that (1) a smaller $K$ is suitable for homophilic datasets and small-scale heterophilic datasets with sparse edges, and (2) a larger $K$ is suitable for large-scale heterophilic datasets and heterophilic datasets with dense edges. Similar observations hold for the number of Fourier terms ($M$).

> **Question #1** How does GrokFormer handle dynamic graphs where the topology changes over time?

Like most competing methods, GrokFormer is designed for static graphs. Dynamic graphs, where the topology evolves over time, are outside the current setting of these methods. We plan to extend GrokFormer to dynamic graphs in future work.
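The truncated spectral decomposition mentioned above can be sketched with SciPy's Lanczos-based eigensolver, which keeps only the $p$ smallest eigenpairs of the normalized Laplacian instead of performing the full $O(N^3)$ decomposition. This is an illustrative sketch on a random sparse graph, not the authors' actual pipeline:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, p = 500, 20

# Random sparse symmetric adjacency matrix
A = sp.random(n, n, density=0.02, random_state=0)
A = ((A + A.T) > 0).astype(float)
A.setdiag(0)
A.eliminate_zeros()

# Normalized Laplacian L = I - D^{-1/2} A D^{-1/2} (degrees clipped to avoid /0)
deg = np.asarray(A.sum(axis=1)).ravel()
d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1.0))
L = (sp.eye(n) - sp.diags(d_inv_sqrt) @ A @ sp.diags(d_inv_sqrt)).tocsc()

# Shift-invert around sigma = -0.5 (below all eigenvalues of L, which lie in
# [0, 2]) robustly returns the p eigenvalues closest to sigma, i.e. the smallest.
lam, U = eigsh(L, k=p, sigma=-0.5, which="LM")
print(lam.shape, U.shape)
```

Only the $n \times p$ eigenvector matrix and the $p$ eigenvalues need to be stored and reused across training iterations, which is why the one-off preprocessing cost in Table A1 is amortized over training.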
Summary: This paper proposes GrokFormer, a novel graph transformer (GT), with superior capability in modelling complex spectral filters. The filter design is both order and spectrum adaptive and is implemented using a specific instantiation of Kolmogorov-Arnold Network in the spectral domain. Results on several node and graph level tasks demonstrate its effectiveness. The visualizations show the expressive modelling of the designed filters. Specifically, comparison with Specformer for modelling complex, comb-like frequency patterns shows the superiority of this novel parameterization of the filter. **Update**: I have carefully read the other reviews and the authors' rebuttals. Overall, the new results presented during the rebuttal period improve the paper greatly. Please revise the paper by including these new results and discussions if accepted. I am happy to raise the score to 4 to indicate an accept recommendation. Claims And Evidence: Theoretical claims: (proofs are in Appendix C) 1) $h(\lambda)$ is learnable in both polynomial order and graph spectrum. 2) Existing spectral GNNs' filters can be represented by this approach. 3) The graph filter in Specformer is a simplified variant of the proposed filter 4) $h(\lambda)$ can approximate any continuous function and constructs a permutation-equivariant spectral graph convolution (similar proof as in Specformer). Empirical claims: 1) Better benchmarking results on both real and synthetic data: Table 2-4 2) More flexible modelling of the frequency spectrum: Figures 1, 5 Methods And Evaluation Criteria: The numerical evaluation is generally comprehensive (specific questions/comments in later sections). In addition to standard transductive node classification benchmarks (both homophilic and heterophilic graphs), several small-scale TU benchmarks for graph classification are also considered.
Experiment on the synthetic dataset in Section 5.3 (also studied in Specformer) also shows the impressive modelling capability of GrokFormer. Theoretical Claims: The proofs are written clearly, relatively simple to follow, and are correct to the best of my knowledge. Experimental Designs Or Analyses: Please see the comments on the experimental evaluation below. Supplementary Material: The supplementary contains details of the datasets, experimental setup, hyperparameters, visualization of the filters' frequency response, and the proofs of the propositions. I have carefully checked the details. Relation To Broader Scientific Literature: This work improves upon Specformer by proposing a more flexible KAN-based parameterization of the filter in the spectral domain, which has superior capability of modelling complicated spectral characteristics. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1) The paper is well-written, easy to read. The visualizations on both synthetic and real data are intriguing. 2) The experiments are thorough and substantiate the main idea clearly for the chosen benchmarks. Weaknesses: 1) There are no experiments on the popular Benchmarking GNN datasets [1], which are the de-facto benchmarks in recent graph transformers, e.g., GRIT [2]. It is well-known that those benchmarks are more difficult and require higher expressivity compared to the datasets considered in this paper. For example, the node classification is experimentally studied entirely in the transductive setting in this work, whereas [1] contains inductive node classification benchmarks. 2) Similarly, it would strengthen the paper if the authors include some results on the long range graph benchmark (LRGB) [3] datasets. 3) It would be helpful to characterize the theoretical expressivity of GrokFormer in terms of GD-WL [4], and compare and contrast it with existing GTs, e.g. GRIT [2].
4) While the authors cite (Xu et al., 2024) for motivating the specific design of the filter in eq. 5, it would be more convincing if some numerical results are presented (even on the synthetic dataset) to showcase why this specific parameterization is chosen. What other alternatives were considered? How were those results compared to this version of the filter? References: [1] Dwivedi, V. P., Joshi, C. K., Laurent, T., Bengio, Y., and Bresson, X. Benchmarking Graph Neural Networks. J. Mach. Learn. Res., December 2022a. [2] Ma, L., Lin, C., Lim, D., Romero-Soriano, A., Dokania, P. K., Coates, M., Torr, P. H. S., and Lim, S.-N. Graph Inductive Biases in Transformers without Message Passing. In Proc. Int. Conf. Mach. Learn., 2023. [3] Dwivedi, V. P., Rampášek, L., Galkin, M., Parviz, A., Wolf, G., Luu, A. T., and Beaini, D. Long Range Graph Benchmark. In Adv. Neural Inf. Process. Syst., December 2022b. [4] Zhang, B., Luo, S., Wang, L., and He, D. Rethinking the Expressive Power of GNNs via Graph Biconnectivity. In Proc. Int. Conf. Learn. Represent., 2023. Other Comments Or Suggestions: N/A Questions For Authors: Please address the questions/comments in the preceding sections. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive and positive comments on the presentation, visualization, and experiment design. Please see our responses to your comments one by one below.

> **Weaknesses #1 and #2** Lack of a few challenging benchmark GNN datasets (datasets in [Ref1] and LRGB datasets [Ref2]) and two recent baseline methods (GraphGPS [Ref3] and GRIT [Ref4]).

Following your suggestion, we conducted experiments on two representative datasets, WikiCS and Peptides-func, obtained from [Ref1] and [Ref2] respectively, to further demonstrate the expressiveness of GrokFormer. Additionally, we also added two recent expressive GTs, GraphGPS and GRIT, as competing methods. The experimental results are shown in Tables A1 and A2.

```
Table A1. Classification accuracy (in percent) on WikiCS and Test AP on Peptides-func.
```

| |WikiCS|Peptides-func|
| --- | --- |---|
|Specformer|82.37|0.5771|
|Polyformer|80.19|0.5319|
|GraphGPS|78.66|0.6257|
|GRIT|77.21|0.6458|
|GrokFormer|83.11|0.5987|

```
Table A2. Classification accuracy comparison of GRIT, GraphGPS, and GrokFormer.
```

||Pubmed|Squirrel|Actor|
| --- | --- |---|---|
|GraphGPS|86.94|35.58|36.01|
|GRIT|87.62|37.68|37.51|
|GrokFormer|91.39|65.12|42.98|

The experimental results show that GrokFormer consistently outperforms all competing methods, including the newly added expressive GTs, in Table A2 and on WikiCS in Table A1. This is primarily because expressive GTs like GraphGPS and GRIT struggle to capture the underlying label patterns due to the Transformer's low-pass nature, resulting in suboptimal performance. Note that the spectral-based methods slightly underperform the expressive GTs on Peptides-func, where the GTs are specifically designed to capture the major spatial structure information that is not modeled by these general spectral methods.
Thus, we believe that this suboptimal performance does not diminish the contribution of the proposed expressive spectral graph filters, given their ability to enhance the Transformer's capability to capture frequency signals beyond low-pass on graphs.

- [Ref1] Benchmarking Graph Neural Networks. JMLR, 2022.
- [Ref2] Long Range Graph Benchmark. NeurIPS, 2022.
- [Ref3] Recipe for a general, powerful, scalable graph transformer. NeurIPS, 2022.
- [Ref4] Graph Inductive Biases in Transformers without Message Passing. ICML, 2023.

> **Weakness #3** The theoretical expressivity discussion in terms of WL, and compare and contrast it with existing GTs, e.g. GRIT.

We would like to clarify that the spectral graph filter designed in GrokFormer is not based on the spatial-domain WL test. GrokFormer is a spectral GT operating in the spectral domain, while 1-WL or GD-WL algorithms are essentially spatial methods based on the graph structure. Therefore, GRIT exhibits better localization spatially, while our GrokFormer achieves better localization in frequency. Although our work lacks a formal theoretical analysis in the spatial domain to discuss strong expressivity in the sense of the WL test, we provide theoretical analysis in the spectral domain to justify its advantages over existing advanced filters and show that our filter possesses better universality and flexibility. Moreover, the filter-fitting experiment on synthetic datasets and the comprehensive empirical comparison on real-world datasets demonstrate the stronger expressivity of our GrokFormer filter.

> **Weakness #4** Lack of numerical results for the filter design in Eq. (5). What other alternatives were considered? How were those results compared to this version of the filter?

We would like to clarify that we have pre-defined six different filters on the synthetic dataset in Section 5.3.1 to verify the fitting capability of the filter design in Eq. (5) compared to alternative methods.
Here we present partial results in Table A3 as follows.

```
Table A3. Node regression results on two difficult synthetic datasets.
```

| |High-pass|Comb|
| --- | --- |---|
|GPRGNN|0.1046(.9985)|4.9416(.9283)|
|Specformer|0.0029(.9999)|0.0083(.9998)|
|PolyFormer|0.0482(.9993)|0.2868(.9945)|
|GrokFormer|0.0012(.9999)|0.0021(.9999)|

Note that we use two metrics to evaluate each method: the sum of squared errors (SSE) and the $R^2$ score. A lower SSE (and higher $R^2$) indicates better performance. The effectiveness of the joint filter order-adaptive and spectrum-adaptive properties in Eq. (5) is empirically confirmed through its much better performance on such difficult synthetic datasets. Meanwhile, two simplified alternative cases, (a) spectrum-only adaptation without high-order filter terms and (b) filter order-only adaptation, are considered in the comparison above. Per Propositions 4.2 and 4.3, the filter variant in case (a) can be regarded as the Specformer filter, while the variant in case (b) corresponds to the GPRGNN filter. The results show that the fitting ability of the GrokFormer filter is much stronger than that of the two alternatives.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their response; it addresses most of my comments. I have only one remaining question.

**numerical results for the filter design in Eq. (5). What other alternatives were considered? How were those results compared to this version of the filter?**

I have read and understood the effectiveness of the proposed method in comparison to the baselines for the synthetic dataset experiment in Section 5.3.1. My question was specifically about the chosen parameterization in Eq. 5: What is the specific motivation of using sine and cosine instead of splines? Were any other alternatives considered? If yes, what is the empirical comparison with that?

---

Reply to Comment 1.1.1: Comment: We're very pleased to know that our response has addressed most of your concerns.
We really appreciate your further comments, and please find our response to your follow-up question as follows.

> **Follow-up Question #1** What is the specific motivation of using sine and cosine instead of spline in the chosen parameterization in Eq. (5)?

The motivation for choosing sine and cosine functions is that they can be viewed as components of a Fourier series, allowing the filter to approximate any continuous function over the eigenvalue range [0,2]; furthermore, this setting helps retain the orthogonality of the Fourier series, supporting the learning of diverse complementary frequency components within a specific interval.

> **Follow-up Question #2** Were any other alternatives considered, and what is the empirical comparison?

As commonly used for filter functions, exponential functions (e.g., $e^{-x}$) can be considered as an alternative to sine and cosine functions, as in GraphHeat [Ref1]. Since $e^{-x}$ mainly captures low-frequency components, to better capture diverse frequency components (e.g., low-, high-, and band-pass), we combine it with a trigonometric function (e.g., cosine) to derive an alternative filter function $h(\lambda)=\sum_{k=1}^{K}\sum_{m=0}^{M}\left(a_{km}\cos(m\lambda^k)+b_{km}e^{-\lambda^k}\right)$. We refer to this variant as GrokFormer$^*$ below. We compared GrokFormer$^*$ with GrokFormer on synthetic datasets in Table A1 and real-world datasets in Table A2 to evaluate its learning capability.

```
Table A1. Node regression results on synthetic datasets.
```

||Low-pass|High-pass|Band-pass|Band-rejection|Comb|Low-comb|
|--- | --- |---|--- | --- |---|---|
|GrokFormer$^{*}$|0.0085(.9999)|0.0073(.9999)|0.0102(.9999)|0.0166(.9999)|0.0205(.9998)|0.0435(.9998)|
|GrokFormer|**0.0011(.9999)**|**0.0012(.9999)**|**0.0004(.9999)**|**0.0024(.9999)**|**0.0021(.9999)**|**0.0029(.9999)**|

```
Table A2. Node classification results on the real-world datasets.
```

||Cora|Citeseer|Pubmed|Squirrel|Chameleon|Actor|WikiCS|
|--- | --- |---|--- | --- |---|---|---|
|GrokFormer$^{*}$|88.69|80.71|90.29|63.98|72.16|41.67|82.54|
|GrokFormer|**89.57**|**81.92**|**91.39**|**65.12**|**75.58**|**42.98**|**83.11**|

Based on the results in Tables A1 and A2, we can observe that GrokFormer$^*$ consistently shows weaker filter-fitting capability and lower classification performance than GrokFormer on the synthetic and real-world datasets, respectively. In particular, GrokFormer tends to show a larger advantage over GrokFormer$^*$ on the heterophilic datasets. This may be because, as discussed above, the alternative filter function in GrokFormer$^{*}$ lacks certain advantageous properties of the filter function in Eq. (5), such as orthogonality and the ability to learn more diverse frequency components. We hope the above reply helps address this follow-up question. We will discuss and clarify this point in our final version. We're more than happy to engage in further discussion to address any remaining concerns you may have. Thank you very much for helping enhance our paper again!

- [Ref1] Xu, B., Shen, H., et al. Graph Convolutional Networks using Heat Kernel for Semi-supervised Learning. IJCAI, 2020.
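To make the kind of order- and spectrum-adaptive filter discussed in this thread concrete, here is a small sketch of evaluating such a filter response. This is only our illustrative reading of the design: the coefficients are random stand-ins for learned parameters, and the exact basis and normalization of Eq. (5) may differ from this simplified form:

```python
import numpy as np

def filter_response(lam, a, b):
    """h(lambda) = sum_k sum_m a[k,m]*sin(m*lam^k) + b[k,m]*cos(m*lam^k).

    The sine/cosine terms form a Fourier-series-style basis over the spectrum
    (spectrum-adaptive), while the powers lam^k supply high-order terms
    (order-adaptive)."""
    h = np.zeros_like(lam)
    K, M1 = a.shape
    for k in range(1, K + 1):
        for m in range(M1):
            h += a[k - 1, m] * np.sin(m * lam**k) + b[k - 1, m] * np.cos(m * lam**k)
    return h

rng = np.random.default_rng(0)
K, M = 3, 8                          # small order / term counts for illustration
a = rng.normal(size=(K, M + 1))      # stand-ins for learned coefficients
b = rng.normal(size=(K, M + 1))
lam = np.linspace(0.0, 2.0, 50)      # normalized-Laplacian spectrum lies in [0, 2]
h = filter_response(lam, a, b)
```

In training, the coefficients would be fit end-to-end; plotting `h` over `lam` is how the comb-like responses in the synthetic-filter experiments can be visualized.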
Temporally Sparse Attack for Fooling Large Language Models in Time Series Forecasting
Reject
Summary: The authors propose a black-box attack for LLM time-series forecasters. They use sparse perturbations, i.e., only a subset of training time steps will be perturbed to fool the model prediction. Additionally, since the true label is unknown, they use the model prediction. Claims And Evidence: The authors claim that an imperceptible sparse attack modifying just 10% of the input can significantly degrade forecasting performance. The evidence the authors provide:

degrade forecasting:
- (+) the proposed TSA outperforms an attack with Gaussian white noise on 4 datasets
- (+) the authors show forecasts of the models under attack (no, GWN, TSA)
- (-) the authors do not compare against other techniques (DGA, see related work)

imperceptibility:
- (+) the input distribution does not significantly change according to Figure 3a)
- (-) the authors don't really investigate the effect of the attack deeply: there is no relation of the $\epsilon$ to the 2% of the mean of the time series that is used for GWN (as far as I see it), so it is tough for the reader to estimate whether an individual perturbation might exceed the Gaussian noise drastically.
- (-) no plot of the manipulated training set. As only 9 points are changed, it would be interesting to see 1) where they are located and 2) what the perturbed points look like, plus some statistics.

Methods And Evaluation Criteria: Proposed methods and evaluation criteria make sense. Theoretical Claims: None Experimental Designs Or Analyses: I like the experiments. Also nice to see the hyperparameter influence experiment. Supplementary Material: I looked at the code. Not sure if this is everything that is needed to repeat all the experiments though. Only two datasets and not all models given. Relation To Broader Scientific Literature: There is quite a huge red flag for me: the relation to **Liu, F. et al.
Adversarial vulnerabilities in large language models for time series forecasting in AISTATS 2025** This paper tackles the exact same scenario and datasets, and has basically the same layout and insights. It also uses a black-box, gradient-free attack where the gradient estimation is very similar to Eq. 6 & 7. To me, the only difference in the current paper seems to be the sparsity, which is just a greedy selection of the $\tau$ elements. Essential References Not Discussed: See above Other Strengths And Weaknesses: More insights (quantitative) on the attack would be interesting. Due to the currently very strong overlap with related work, I would say the contribution of the sparsity (greedy selection) is not enough. But I would be happy to adjust if I am misunderstanding something. Other Comments Or Suggestions: - Questions For Authors: What is the main contribution over Liu et al. 2024b? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks sincerely for your time and review.

> Your **major concern** is **the relation to [1] Liu, F. et al. Adversarial vulnerabilities in large language models for time series forecasting in AISTATS 2025.**

1. Liu, F. et al. began exploring the vulnerabilities of LLMs in time series forecasting;
2. This submission is a follow-up study that addresses a key gap in [1], where the proposed full-series attack required manipulating the entire input sequence. Such an approach may be impractical in real-world scenarios, as it would force the attacker to interact with the data collector continuously over extended periods (e.g., hours), reducing stealth and increasing the risk of detection;
3. This work proposes a temporally sparse attack method that poisons only 10% of the input, effectively manipulating LLM-based forecasters while maintaining high imperceptibility.

Considering attacking LLMs in time series forecasting as a broader research thesis, we view [1] as chapter one and this submission as chapter two. Specifically, we believe it is an accepted practice in the research community to apply the same datasets, layout, scenarios, and experimental settings across a series of studies that address a shared foundational topic.

> The **second major concern** is **whether the contribution of temporal sparsity is enough**.

1. From a practical perspective, a temporally sparse attack is more realistic and imperceptible. E.g., in a 5-min traffic prediction scenario, a full-series attacker aiming to manipulate the traffic system would need to interact with the model over hours, which violates a key principle of adversarial attacks: stealth. The use of temporal sparsity is particularly meaningful as it enables the attacker to determine when to launch an attack.
2. From the mathematical and technical perspective, we introduce CCOP as a framework to model the attack process, contributing a novel angle to AI safety research.
Though the change in the mathematical formulation may seem minor, the constraint $||w||_0 = \tau$ rather than $||w||_1 < \tau$ or $||w||_0 < \tau$ transforms the problem into a **non-convex and NP-hard** challenge. While [1] also explored a black-box and ground-truth-inaccessible setting, it was formulated as a convex problem. Our adaptation of Subspace Pursuit, a non-convex optimization algorithm originally designed for white-box Lasso, to the black-box, label-free setting represents a key technical contribution. (Eqs. 6 & 7 are a marginal modification from [1] to link SP with black-box, label-free settings.)

Besides:

***1. The authors do not compare against DGA [1].***

**TSA and DGA were compared in Section 6.6, Attack on Defended LLM-Based Forecasting Models**, to evaluate whether TSA can bypass filtering-based defenses. Compared to [1], TSA proved more challenging for filtering techniques to correct, as recorded in Figure 4.

***2. There is no relation of the $\epsilon$ to the 2% of the mean of the time series that is used for GWN.***

We clarify that **both TSA and GWN share the same definition of $\epsilon$**. Because TSA perturbs only 10% of the time series, whereas GWN perturbs the entire series, we set $\epsilon = 0.1$ for TSA (larger than the 0.02 used for GWN). **In Figures 2(b) and 2(d), the left two-thirds visualize GWN and TSA**, where the purple shaded area denotes GWN and the orange shaded area represents TSA.

***3. No plot of the manipulated training set.***

**These were plotted in Figure 2.** E.g., subplot (d) shows a weather prediction by TimeGPT using inputs from March 21 00:00 to March 21 16:00 to forecast the weather until March 22 00:00; the orange-shaded regions in the input range indicate the time steps attacked by TSA, and the height of the orange shading represents the magnitude of the perturbations added.
***4. Not sure if this is everything that is needed to repeat all the experiments though.***

Due to size limitations, we included two datasets and one API-based forecasting model to demonstrate the effectiveness of TSA. **We will release the complete code and all datasets** to ensure full reproducibility.

In summary, this submission is a follow-up to prior work [1] exploring a new topic on adversarially attacking LLMs in time series forecasting. We identify a practical gap in [1], where the full-series attack lacks imperceptibility, and apply CCOP, a novel mathematical tool, to model the attack process. To solve this non-convex and NP-hard problem, we adapt SP to the black-box, label-free setting corresponding to attacking LLMs, by incorporating gradient-free optimization. TSA is shown to be more stealthy and more effective at bypassing filter-based adversarial defenses than [1]. If the concern about the overlap with [1] is addressed, would you consider increasing your score? To be honest, the current score is very harmful to the thesis of exploring LLMs' vulnerabilities in time series forecasting. Your response will be sincerely appreciated.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their explanation.

minor comments:
1. Thanks for pointing me to the comparison. It could be made clearer by actually using the method names in the figures etc.; with only "sparse vs. non-sparse" it can easily be overlooked that you e.g. use GWN as the non-sparse baseline again.
2. Thanks for clarifying. It was not clear from reading the paper, but this makes me trust the experiments more.
3. To Figure 2): it is not specified where the input sequence ends, so this can be improved.
4. Thanks.

major concerns:
1) I think this paper does not really showcase well where its contribution begins. The threat model etc. and how to obtain the gradient never really mention the very closely related work that defines and proposes many of the things already. (I see that some parts of e.g.
the threat model are unique though.) I think a restructuring of this work and properly disentangling it from the related work would improve the paper a lot.

2) Thanks for the clarification. I raise my score now based on the above clarification, and will read the two papers and the other reviews and reconsider the score again.

---

Reply to Comment 1.1.1: Comment: We sincerely thank you for your response. Reviewer r2iK expressed concerns regarding the **distinction between this submission and [1]** Liu, F. et al. Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting in AISTATS 2025. We would like to further clarify their differences. While [1] investigates the vulnerabilities of LLMs in time series forecasting, our submission focuses on practical attack strategies. Specifically:

1. **Gap in [1] was discussed and a new question was raised**

While [1] explores attacks that perturb the entire input time series, we argue that such approaches are **impractical** and **violate the principle of imperceptibility**, particularly in real-world scenarios where an attacker would need to manipulate a long sequence over hours—rendering the attack unrealistic for time-sensitive applications. This observation led us to pose a new and more practical research question:

> **Can an attacker fool LLMs in time series forecasting by perturbing only a small fraction of the input time series?**

2. **Temporally sparse attack was proposed**

This submission proposes a temporally sparse attack that perturbs only 10% of the input time series while maintaining attack effectiveness.

3. **A new mathematical tool, CCOP, was introduced to model the sparse attack process**

- Section 4, Threat Model, outlines the attacker's capabilities and the overall attack process. Our threat model retains the **black-box** and **label-free** assumptions from [1], which are essential for attacking LLMs in time series forecasting.
In addition, we impose a new constraint by limiting the attacker’s ability to perturb only a **temporally sparse subset** of the input series—making the attack more realistic and stealthy. - To formally model this process, we introduce a **Cardinality-Constrained Optimization Problem (CCOP)**—a non-convex and NP-hard formulation. To the best of our knowledge, this is the **first application of CCOP** in the context of adversarial attacks, providing a principled foundation for studying sparse attack strategies. 4. **A non-convex optimization algorithm was extended** We adapted **Subspace Pursuit (SP)**—an approximate solver originally designed for white-box LASSO problems—to suit **black-box and label-free** conditions. - In Section 5.1, we proposed a method to perturb a single time step with zeroth optimization (modified from [1]), while [1] was to manipulate the entire series. - While this section shared some mathematical similarities with [1], this technique served only as a subroutine for the modified SP in Section 5, which handles temporally sparse attacks by solving CCOP, a non-convex and NP-hard optimization problem. To the best of our knowledge, these contributions are **orthogonal** to [1] and address a **practical gap** that has not been sufficiently explored. We believe this clarifies the novelty and significance of our work. --- ### In Summary This submission makes the following novel contributions: 1. It addressed a **new research question**. 2. It introduced **CCOP** to model adversarial attacks under temporal sparsity—a novel and mathematically grounded formulation. 3. It **modified Subspace Pursuit** from $\tau$-sparse white-box LASSO to work under black-box and label-free conditions for time series attacks against LLMs. 4. It empirically demonstrated that temporally sparse attacks are **not only more stealthy but also more effective in bypassing filter-based adversarial defenses** than [1]. 
We sincerely appreciate the opportunity to re-clarify the distinction between this submission and [1], as well as to recognize the advancements it makes beyond prior work. Given the growing interest in using LLMs as foundation models for time series forecasting, assessing their **vulnerability** and **reliability** is both a critical and timely research direction. We believe this submission makes a **significant contribution** to advancing the understanding and robustness of LLMs in this emerging domain.
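As a self-contained illustration of the pipeline summarized above (zeroth-order gradient estimation with the model's own forecast as pseudo ground truth, plus a hard $||\delta||_0=\tau$ cardinality constraint), here is a toy sketch. It is not the paper's SP algorithm: the forecaster `f`, the query budget, and the smoothing radius `sigma` are all placeholder choices of ours:

```python
import numpy as np

def sparse_black_box_attack(f, x, tau, eps, n_queries=20, sigma=1e-2, rng=None):
    """Toy tau-sparse black-box attack: estimate a gradient by random finite
    differences (label-free: the clean prediction f(x) serves as the pseudo
    label), then keep only the tau largest-magnitude coordinates."""
    if rng is None:
        rng = np.random.default_rng(0)
    y_pseudo = f(x)                       # model's own forecast as pseudo ground truth
    def loss(z):                          # attacker wants to maximize deviation
        return -np.mean((f(z) - y_pseudo) ** 2)
    g = np.zeros_like(x)
    for _ in range(n_queries):            # two-point zeroth-order gradient estimate
        u = rng.standard_normal(x.shape)
        g += (loss(x + sigma * u) - loss(x - sigma * u)) / (2 * sigma) * u
    g /= n_queries
    delta = -eps * np.sign(g)             # step that decreases the (negated) loss
    support = np.argsort(np.abs(g))[-tau:]  # enforce ||delta||_0 = tau greedily
    mask = np.zeros_like(x)
    mask[support] = 1.0
    return x + delta * mask

# Toy "forecaster": predicts the input-window mean at 8 future steps.
f = lambda x: np.full(8, x.mean())
x = np.sin(np.linspace(0, 4 * np.pi, 96))   # 96-step input, as in the paper's setting
x_adv = sparse_black_box_attack(f, x, tau=9, eps=0.1)
```

The resulting `x_adv` differs from `x` in at most 9 time steps, each by at most `eps`, mirroring the "perturb only ~10% of a 96-step input" setting; the full method replaces this one-shot top-$\tau$ selection with the iterative support refinement of Subspace Pursuit.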
Summary: This paper proposes Temporally Sparse Attack (TSA), an adversarial attack on LLM-based time series forecasting. Unlike existing methods that modify the entire input, TSA perturbs only a small fraction of time steps, significantly degrading forecasting accuracy. The attack is formulated as a Cardinality-Constrained Optimization Problem (CCOP) and solved using a Subspace Pursuit (SP)-based method. Experiments on LLMTime, TimeGPT, and TimeLLM show that modifying just 10% of inputs causes severe performance drops, revealing vulnerabilities in LLM-based forecasting. Claims And Evidence: The claims are experimentally supported, but robustness against adaptive attacks remains unclear. Methods And Evaluation Criteria: The approach follows standard adversarial attack frameworks and is evaluated on multiple LLM-based forecasters. Theoretical Claims: The formulation is mathematically sound, but proofs on attack effectiveness could be more detailed. Experimental Designs Or Analyses: Experiments cover diverse datasets and models. Supplementary Material: No supplementary materials available. Relation To Broader Scientific Literature: Extends adversarial attack research to LLM-based time series forecasting. Essential References Not Discussed: Some recent defenses against sparse attacks are missing in the discussion. Other Strengths And Weaknesses: Strengths: Sparse attack strategy minimizes the number of modified time steps, making detection difficult. Black-box setting ensures broad applicability without requiring model knowledge. Works across various LLM-based forecasting models, proving its generalizability. Weaknesses: Limited evaluation on real-world applications, such as financial or medical forecasting. No discussion on adaptive adversarial defenses, which might mitigate TSA’s impact. Computational overhead of the Subspace Pursuit optimization is not analyzed. 
Other Comments Or Suggestions: Provide an efficiency analysis of TSA, particularly the runtime impact of Subspace Pursuit. Compare TSA with gradient-based attacks to show its advantages beyond sparsity. Evaluate effectiveness under real-world conditions, such as noisy time series data. Questions For Authors: 1. Can TSA bypass adaptive defenses? Have you tested whether filtering-based or adversarial training defenses can counteract TSA? 2. How efficient is the optimization? What is the computational overhead of the SP-based optimization, and how does it scale with input size? 3. Can TSA work in practical forecasting applications? Have you considered evaluating TSA in domains like finance or healthcare, where adversarial robustness is critical? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for taking the time to review our paper. Your **major concerns** are:

1. Evaluation on real-world applications, such as financial or medical forecasting.
2. Discussion on adaptive adversarial defenses that could mitigate TSA's impact.
3. Analysis of the computational overhead of the Subspace Pursuit method.

We fully agree that these aspects are important for a comprehensive adversarial study. However, we would like to clarify that **real-world evaluations, defense bypassing tests, and computational overhead analysis were all included in the original submission**. We want to emphasize that our intention is not to criticize any oversight. Your comments are valuable and insightful, helping to improve the clarity and impact of our study.

> *Can TSA bypass adaptive defenses? Have you tested whether filtering-based or adversarial training defenses can counteract TSA?*

**Section 6.6. Attack on Defended LLM-Based Forecasting Models** presented an experiment evaluating whether the Temporally Sparse Attack (TSA) can bypass filtering-based defenses. The study included three LLM-based forecasters, three datasets, and three filter-based mitigation methods. Compared to a full-series black-box attack against LLM-based forecasting as the baseline [1], TSA proved more challenging for filtering techniques to correct, as recorded in Figure 4.

[1] Liu, F. et al. Adversarial vulnerabilities in large language models for time series forecasting in AISTATS 2025

**Section 7. Mitigation** first discussed why adversarial training defenses are impractical because of the high re-training cost of LLMs. It then explored a potential mitigation strategy where the LLM-based forecaster backcasts historical time series from its own predictions. By comparing the backcasted series with the original inputs, manipulated time steps can be identified. Once detected, Gaussian filtering is applied to correct the poisoned inputs.

> *How efficient is the optimization?
What is the computational overhead of the SP-based optimization, and how does it scale with input size?* At the end of **Section 5.2**, the computational complexity of the SP-based method was provided as $\mathcal{O}\left(T\times\tau\right)$, whereas a standard greedy search method exhibited significantly higher complexity at $\mathcal{O}\left(T^{\tau}\right)$. **Section 6.7. Hyperparameter Analysis** empirically examined the computational cost as the sparsity level $\tau$ increased. The results, recorded in Figure 5(b), demonstrated that TSA's cost scales linearly with the sparsity level. The input size was 96, a long enough setting in time series forecasting[2][3]. [2] Wang, S. et al. TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting in ICLR 2024. [3] Gruver, N. et al. Large language models are zero-shot time series forecasters in Neurips 2023. > *Can TSA work in practical forecasting applications? Have you considered evaluating TSA in domains like finance or healthcare, where adversarial robustness is critical?* In **Section 6.4. Overall Comparison**, a dataset from the finance domain—real-world foreign exchange rate data—was included in the experiment, along with three practical forecasting applications: traffic, power consumption, and meteorological forecasting. All datasets consist of noisy time series collected from real-world sources, as introduced in **Section 6.1. Datasets**. Since publicly available de-identified healthcare data is not yet large enough to serve as a standard benchmark in forecasting research, it was not included in this study. Besides, you commented: >*Compare TSA with gradient-based attacks to show its advantages beyond sparsity.* **Section 4: Threat Model** provides an in-depth analysis of the attacker's capabilities in manipulating LLM-based time series forecasting. 
Given that internal information, such as LLM parameters, is difficult to access and ground truth values are unavailable at runtime, the attacker cannot compute gradients, making gradient-based adversarial attacks inapplicable. We sincerely acknowledge the importance of evaluating real-world applications, defense bypassing tests, and computational overhead analysis—all of which were included in the original submission. Your comments have been truly encouraging, as they align with our own standards for a comprehensive adversarial attack study. As you may have noticed, our initial scores were quite low, and feedback from an experienced researcher in adversarial studies is extremely important for our research. Considering that **the issue is one of insufficient prominence rather than missing critical information**, which we hope can be judged as minor, would it be possible for you to reconsider and raise your final score? Thank you once again for your time, effort, and constructive suggestions! Any additional feedback would be greatly appreciated.
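As a rough illustration of the complexity gap cited above, the difference between enumerating all $\tau$-sized supports and a pursuit-style $\mathcal{O}(T\times\tau)$ search can be counted directly. This is only a back-of-the-envelope sketch: T = 96 matches the input size quoted above, τ = 9 is an assumed sparsity level near 10% of inputs, and constant factors of the actual TSA implementation are ignored.

```python
from math import comb

T, tau = 96, 9  # input length from the rebuttal; assumed ~10% sparsity level

# Exhaustive search over supports must score every tau-sized subset of
# the T time steps, a count that grows roughly like T**tau:
exhaustive = comb(T, tau)

# A pursuit-style method instead re-scores all T candidate steps for
# on the order of tau refinement rounds:
pursuit = T * tau

print(f"exhaustive: {exhaustive:e} candidate supports vs. pursuit: {pursuit} evaluations")
```

Even at this modest input length the exhaustive count exceeds a trillion supports, while the pursuit-style count stays in the hundreds, which is why the linear scaling reported in Figure 5(b) matters in practice.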
Summary: This paper proposes a Temporally Sparse Attack (TSA) for LLM-based time series forecasting. Previous studies achieve adversarial attacks by modifying the entire time series with perturbations. This paper proposes modifying only a sparse portion of a sequence. Specifically, it formulates the task as a cardinality-constrained optimization problem (CCOP) and proposes a subspace pursuit-based algorithm to solve it. The experimental results on representative LLM-based time series forecasting models demonstrate the method's effectiveness. ## Update after Rebuttal I thank the authors for the rebuttal. But I'm not convinced about the novelty and the motivation of the solution. Moreover, I still believe the proposed method should be evaluated on non-LLM time series forecasting models. So I would like to keep my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Code. Relation To Broader Scientific Literature: The key contributions are related to AI safety, adversarial attacks, and time series forecasting. Adversarial attack and defense methods have been well studied in CV and NLP, but limited studies have been conducted in the time series area, especially for forecasting. Essential References Not Discussed: Some studies on adversarial attacks on time series classification should also be included, as the basic modules of time series encoding are the same for the two tasks. Only the loss functions have some differences. For example, [1] Adversarial Attacks on Time Series, TPAMI 2020 [2] Black-Box Adversarial Attack on Time Series Classification, AAAI 2023 Other Strengths And Weaknesses: Strengths: 1. This paper reveals the vulnerability of LLM-based time series forecasters. 2. The paper is well written and easy to follow. 3. The experiments are conducted on three LLM-based time series forecasting models with different LLMs, which is comprehensive. 
Weaknesses: 1. From my perspective, the idea of "sparsity constraints" is not novel. Similar ideas have already been proposed in other time series attack studies, e.g., [1] Black-Box Adversarial Attack on Time Series Classification, AAAI 2023 [2] Towards Backdoor Attack on Deep Learning based Time Series Classification 2. The proposed method is a gradient-based optimization with sparsity constraints. The novelty is not sufficient. 3. Again, as the sparsity idea has been proposed before, several solutions have been proposed to solve CCOP (or similar problems). The computational cost of the proposed SP-based algorithm should be analyzed and compared to existing ones. 4. LLM-based time series forecasters and non-LLM ones have no significant differences under the settings of this paper. It would be better to include more non-LLM models for evaluation. 5. Some bar charts are obscured in Figure 4. The legends should be relocated. Other Comments Or Suggestions: Please refer to the above comments. Questions For Authors: 1. Could you provide the time cost of the proposed method? 2. If we want to use adversarial training for defense, can we freeze the LLM and train only the reprogramming part in Time-LLM? Thus, the computational cost is acceptable. How about the defense performance? 3. As we know, several adversarial attacks have been proposed for time series classification; could these methods be applied to the forecasting task? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks sincerely for your reviews. > Your **major concern** is: **forecasting attacks vs. classification attacks** in time series. Please allow us to present our conclusion first: **Attacking forecasting and classification are fundamentally different, and attacks for one are not directly applicable to the other.** The key distinction lies in the nature of forecasting as an autoregressive task, where the ground truth is unavailable at runtime. E.g., in a 5-min-ahead stock value prediction, the prediction's ground truth at 10 am is its value at 10:05 am, which remains inaccessible at 10 am to both the forecaster and the attacker. Thus, attacks on forecasting **CANNOT use true labels** to generate adversarial examples, unlike time series classification, which operates on static labeled data. This difference between adversarial attacks in forecasting and static applications was extensively analyzed in prior research, like [1-2]. Because this distinction was considered known background, it was only briefly discussed in Section 2.2: Attack on Time Series Forecasting. [1] Raphaël, D. et al. Adversarial Attacks on Probabilistic Autoregressive Forecasting Models in ICML 2020 [2] Liu, L. et al. Robust multivariate time-series forecasting: Adversarial attacks and defense mechanisms in ICLR 2023 Your comments provide valuable insight—since both tasks involve time series data, it is beneficial to strengthen the connection between forecasting and classification attacks. In response, we will add a subsection in the camera-ready version to explicitly discuss their differences and connections, **citing [3-5]**. [3] Ding, D. et al. Black-box adversarial attack on time series classification in AAAI 2023 [4] Ding, D. et al. Towards Backdoor Attack on Deep Learning based Time Series Classification in ICDE 2022 [5] Ding, D. et al. 
Improving the Robustness of Wasserstein Embedding by Adversarial PAC-Bayesian Learning in AAAI 2020 > The **second major concern** is **the novelty of CCOP and the SP algorithm**, mentioning *"Only the loss functions have some differences."* Please allow us to emphasize that while a mathematical difference may appear marginal, it can lead to significantly different problems, especially in the optimization domain. E.g., \begin{array}{ccc} \min||y-wx||,\ ||w||_1<\tau \quad (1) & \min||y-wx||,\ ||w||_0<\tau \quad (2) & \min||y-wx||,\ ||w||_0=\tau \quad (3) \end{array} These three formulations differ by only a single-character change, yet (1) is a convex optimization problem, (2) is a non-convex problem that can be approximated and enveloped by a convex relaxation such as (1), **while (3) becomes a challenging non-convex and NP-hard problem**, known as the Cardinality-Constrained Optimization Problem (CCOP) in optimization and operations research. This is the first use of CCOP, a new mathematical tool, to model the attack process by introducing a $\tau$-sparse temporal constraint and guiding the timing of the attack. It is important to note that **existing attacks focus on convex optimization, and their gradient-based solutions do not apply to solving CCOP.** Optimization researchers found that Orthogonal Matching Pursuit (OMP) and Subspace Pursuit (SP) can provide approximate solutions to CCOP, formulated as (3), within polynomial time. Applying them to this study raised two key challenges: **unavailability of LLM parameters (black-box setting)** and **unavailability of true labels y**. We incorporated gradient-free optimization and adapted SP, forming our key technical contribution. It is important to highlight that **the final algorithm is a gradient-free solution for a black-box, non-convex, and NP-hard problem.** For **minor concerns**: 1. *It would be better to include more non-LLM models for evaluation.* The full-series attack, Liu, F. et al. 
Adversarial vulnerabilities in large language models for time series forecasting in AISTATS 2025, provided a similar comparison. 2. *Provide time cost for the proposed method.* The **computational complexity** was provided as $\mathcal{O}\left(T\times\tau\right)$ in Section 5.2. The **computational cost experiment** was in Section 6.7. Hyperparameter Analysis. 3. *Can we freeze LLMs and adversarially train the reprogramming part in Time-LLM?* This is a valuable question for understanding LLMs' vulnerabilities in time series. External components in Time-LLM primarily involve prompt encoding. Then, your question touches on a fundamental and meaningful direction: "Can prompt engineering enhance the robustness of LLMs in time series forecasting?" Addressing this would require a comprehensive body of future work. We deeply value your emphasis on connecting and distinguishing classification/forecasting attacks. We plan to add a dedicated subsection for this emphasis. As you may have noticed, our initial scores were quite low, and we would be sincerely grateful if you would consider significantly improving your final score.
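For readers less familiar with Subspace Pursuit, a minimal white-box sketch of the classic algorithm (Dai & Milenkovic, 2009) on a synthetic sparse-recovery problem is shown below. This is only the textbook version: TSA's actual adaptation is black-box and label-free, replacing the least-squares refits with gradient-free estimates, and is not reproduced here.

```python
import numpy as np

def subspace_pursuit(A, y, k, n_iter=20):
    """Textbook Subspace Pursuit: recover a k-sparse x with y ~= A @ x."""
    n = A.shape[1]

    def ls_fit(support):
        # Least-squares fit restricted to the given support, plus residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        return coef, y - A[:, support] @ coef

    # Initialize support with the k columns most correlated with y.
    support = np.argsort(-np.abs(A.T @ y))[:k]
    coef, resid = ls_fit(support)
    for _ in range(n_iter):
        # Expand: add the k columns most correlated with the residual.
        merged = np.union1d(support, np.argsort(-np.abs(A.T @ resid))[:k])
        coef_m, _ = ls_fit(merged)
        # Prune back to the k largest least-squares coefficients.
        new_support = merged[np.argsort(-np.abs(coef_m))[:k]]
        new_coef, new_resid = ls_fit(new_support)
        if np.linalg.norm(new_resid) >= np.linalg.norm(resid):
            break  # no residual improvement: stop
        support, coef, resid = new_support, new_coef, new_resid

    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Synthetic check: recovery of a 3-sparse signal from 50 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[[7, 42, 91]] = [1.5, -2.0, 0.8]
x_hat = subspace_pursuit(A, A @ x_true, k=3)
```

In this noiseless, well-conditioned regime the recovered `x_hat` matches `x_true` on both support and values up to numerical tolerance, in line with the polynomial-time approximation guarantee the rebuttal cites.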
Summary: The paper presents the Temporally Sparse Attack (TSA), a novel adversarial attack method for LLM-based time series forecasting models that requires manipulating only a small subset of input time steps. The authors formulate the problem as a CCOP and develop an SP-based algorithm to generate these sparse perturbations. Experiments show that modifying just 10% of input time steps can significantly degrade model performance across various LLM-based forecasters and datasets. The authors also show that conventional filter-based defenses fail to mitigate these sparse attacks, highlighting a critical vulnerability in current LLM-based forecasting systems. Claims And Evidence: The paper claims that: 1. LLM-based forecasters are highly vulnerable to sparse perturbations (modifying just 10% of inputs) 2. The proposed TSA outperforms Gaussian White Noise baselines 3. Standard filter-based defenses fail against these sparse attacks These claims are substantiated through comprehensive experiments across multiple models and datasets, with results clearly presented in tables and figures. Methods And Evaluation Criteria: The method is technically sound, formulating the attack as a CCOP and developing an SP-based algorithm that efficiently identifies critical time steps to perturb. Given the constraints of attacking LLMs, the zeroth-order optimization technique for black-box gradient estimation is appropriate. Theoretical Claims: This paper has no theoretical claims. Experimental Designs Or Analyses: The comprehensive experiments cover multiple LLM-based forecasters and one non-LLM baseline across diverse datasets. The comparative analysis against GWN baselines effectively demonstrates TSA's superiority. The study of various filter-based defenses provides valuable insights into the current vulnerability of these systems. 
The hyperparameter analysis in Figure 5 is informative, though it could be expanded to include more sparsity levels to better understand the trade-off between sparsity and attack effectiveness. Supplementary Material: I have not checked the code the authors provided. Relation To Broader Scientific Literature: Liu et al. (2024b) proposed targeted gradient-free optimization-based attacks for black-box settings but still required modifying the complete time series. TSA introduces the critical insight that carefully selecting specific time steps for modification can be more effective than perturbing the entire sequence, addressing a key practical limitation of previous attack methods. Essential References Not Discussed: All essential references are discussed. Other Strengths And Weaknesses: Strengths: 1. Addresses an unexplored vulnerability in LLM-based forecasting. 2. Comprehensive experiments across multiple models and datasets. Weaknesses: 1. Limited intuition about why specific time steps are more vulnerable than others. 2. The proposed autocorrelation-based detection method is mentioned but not thoroughly evaluated. Other Comments Or Suggestions: Since the experimental results do not improve much, it is unclear why this problem is essential. Questions For Authors: 1. There are many methods that solve the optimization problem. Why choose Subspace Pursuit (SP)? 2. Is there a pattern to which time steps your algorithm selects for perturbation? Do they share specific characteristics across datasets? 3. How does the effectiveness of TSA vary with different sparsity levels? 4. Why can additive perturbation be replaced with multiplicative perturbation? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you sincerely for dedicating your time to reviewing this paper. To respond to your comments with the respect they deserve, we have carefully addressed each of your points individually. > *There are many methods that solve the optimization problem. Why choose Subspace Pursuit (SP)?* CCOP is a non-convex and NP-hard optimization problem, distinguishing it from most existing adversarial studies that primarily focus on convex optimization. Over decades, researchers in optimization and operations research proposed three approaches: greedy search, Orthogonal Matching Pursuit (OMP), and Subspace Pursuit (SP). Greedy search suffers from poor scalability with a computational complexity of $\mathcal{O}(T^{\tau})$, while OMP and SP can provide approximate solutions in polynomial time ($\mathcal{O}(T\times\tau)$) [1–2]. However, these two methods were originally developed for white-box cardinality-constrained Lasso problems. Applying them to attacks on LLMs in time series forecasting introduces two key challenges: **the unavailability of LLM parameters (black-box setting)** and **the absence of ground truth labels**. In this work, we adapted SP to the black-box, label-free setting through the incorporation of a gradient-free optimization technique. [1] Dai, W. and Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction in IEEE Trans on Information Theory 2009. [2] Byonghyo, S. et al. Generalized orthogonal matching pursuit in IEEE Transactions on Signal Processing 2012. > *Is there a pattern to which time steps your algorithm selects for perturbation? Do they share specific characteristics across datasets?* This is a significant question that contributes to understanding LLMs' vulnerabilities in time series forecasting. A common observation is that the steps to attack often fall within the latest 25% of the input time series. The exact patterns can vary across time, datasets, and models. 
E.g., in weather prediction on March 21 by TimeGPT, steps [72, 78, 79, 83, 89, 91, 93, 94, 95] are attacked, while in power consumption prediction on Oct 24-28 by LLMTime w/ GPT-3.5, steps [74, 76, 77, 78, 84, 89, 90, 92, 95] are likely to be attacked. We appreciate this insightful question and will include a discussion of it in the camera-ready version to further strengthen our contribution. > *How does the effectiveness of TSA vary with different sparsity levels?* This analysis was provided in **Section 6.7: Hyperparameter Analysis**, where Figure 5(c) evaluated the prediction errors under different sparsity levels. The results showed that TSA's effectiveness steadily improves as $\tau$ increases from 2 to 8, but begins to plateau when $\tau > 9$. This experiment also highlights the core idea behind temporally sparse attacks—attacking more time steps does not significantly improve performance beyond a certain point, while it increases the risk of the attack becoming perceptible. > *Why can additive perturbation be replaced with multiplicative perturbation?* This substitution is made for mathematical convenience, as the two formulations can be transformed into each other based on Equation (4). In the multiplicative perturbation setting, temporal sparsity is simply represented as $\|w\|_0 = \tau$ and the perturbation magnitude is constrained by $\|w\|_1 \le \epsilon$. For the additive setting, the equivalent constraints become $\|w\|_0 = \tau$ and $\|w\|_1 \le \epsilon X_t$, where $X_t$ denotes the value at $t$. > *Since the experimental result does not improve much, why this problem is essential is unclear.* Results in Table 1 showed the average error increase caused by TSA is **0.057** in MSE, while the average error increase caused by GWN is 0.013. Our results demonstrated **4x** greater error increases by TSA than GWN on average. 
In certain specific cases (e.g., weather prediction by LLMTime with LLaMA 2), the error caused by TSA was similar to that caused by GWN. However, the overall results demonstrated that TSA consistently and significantly disrupts these models. > *The proposed autocorrelation-based detection method is mentioned but not thoroughly evaluated.* An adversarial study typically focuses on either attack or defense. This submission aims to perform a temporally sparse attack against LLMs in time series forecasting under black-box and label-free settings. The experiment on attacks against defenses was presented in **Section 6.6**, where TSA successfully bypassed filter-based mitigation methods. We concluded that TSA posed a new challenge to LLM safety. An autocorrelation-based detection method was discussed as a potential defense, inspiring future research on protecting LLMs in time series forecasting. We genuinely hope that our clarifications have addressed your concerns. Considering that our initial score is quite low, we sincerely hope you might consider raising it. Thank you once again for your valuable comments. Any feedback will be appreciated. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I'm still not convinced about the part where additive perturbation is replaced with multiplicative perturbation. If $X_i=0$ at any time step, then a multiplicative perturbation will always result in zero, while an additive perturbation can be non-zero. How do you deal with this edge case? --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your feedback, particularly your detailed and thought-provoking question. Before directly addressing it, please allow us to provide some background regarding an open problem in adversarial attacks for time series forecasting. Adversarial attacks should be **imperceptible**, meaning the perturbations must remain sufficiently small. 
In the context of time series forecasting, two forms of additive perturbation constraints have been proposed: \begin{array}{cc} \|\rho\|_1 < \epsilon \quad (1) & \|\rho\|_1 < \epsilon \cdot X_t \quad (2) \end{array} where $\rho$ denotes the perturbation, $X_t$ is the value at time $t$, and $\epsilon$ is the scale constraint. - Proponents of **(1)** argue that this follows standard settings from the computer vision domain. - Proponents of **(2)** counter that (1) can result in perceptible perturbations. For example, if $\epsilon = 0.1$ and $X_t = 0.01$, then (1) allows a perturbation ten times larger than the original value at time $t$, which is easily detectable. Thus, (2)—a **temporally dynamic upper bound**—is more realistic, as it ensures that perturbations are proportional to the value at each specific time point. We align with the reasoning behind **(2)**, which also enables a natural transformation from additive to multiplicative noise. Now, returning to your question: If $X_i = 0$, then according to constraint (2), the upper bound of the perturbation is also zero. As a result, **both the additive perturbation and the multiplicative perturbation at time $i$ must be zero—yielding equivalent behavior**. We hope this explanation is clear, and we would be grateful for any further clarification you might need from our side. We truly value your time, effort, and ideas in reviewing our work. Given that there is still some time before the review editing deadline, we would greatly appreciate it if you could share any additional questions, concerns, or suggestions about exploring the potential risks of LLMs in time series forecasting. Your insights have already been incredibly helpful, and any feedback will be valuable for our future submissions.
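The additive/multiplicative equivalence argued above can be checked numerically. The sketch below assumes the multiplicative form $x_{\text{adv}} = (1 + w) \odot x$ on a strictly positive series (the paper's exact Equation (4) is not reproduced here); under constraint (2), a zero value at time $i$ would force both perturbation forms to zero at that step.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.5, 2.0, size=96)  # positive input series of length 96
eps, tau = 0.1, 9                   # scale bound and sparsity level

# tau-sparse multiplicative perturbation: |w_t| <= eps on tau chosen steps.
w = np.zeros_like(x)
steps = rng.choice(x.size, size=tau, replace=False)
w[steps] = rng.uniform(-eps, eps, size=tau)

x_mult = (1.0 + w) * x  # multiplicative form
rho = w * x             # the equivalent additive perturbation
x_add = x + rho

assert np.allclose(x_mult, x_add)              # the two forms coincide
assert np.count_nonzero(rho) <= tau            # temporal sparsity preserved
assert np.all(np.abs(rho) <= eps * x + 1e-12)  # per-step bound (2) holds
```

Note that if some $x_i = 0$, then $\rho_i = w_i \cdot x_i = 0$ regardless of $w_i$, matching the authors' argument that constraint (2) forces both forms to zero at that step.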
TtBA: Two-third Bridge Approach for Decision-Based Adversarial Attack
Accept (poster)
Summary: The paper proposes a novel decision-based black-box attack against image classifiers. The attack is called TtBA and it is based on exploiting the geometry of the decision boundary. It introduces the notion of the $k_{bridge}$ metric and discusses how it helps in constructing an efficient adversarial example. The attack is demonstrated to outperform existing methods on a wide range of datasets and models (undefended and defended) in the untargeted and targeted attack settings. Claims And Evidence: The claims made in the paper are supported by empirical observations, formal analysis (Appendix) and extensive experimental results (Section 5 and Appendix). Methods And Evaluation Criteria: In line 169, the targeted attack is formulated via a target image $x_{target}$ with its corresponding label $f(x_{target})$ rather than the more typical formulation via a target label $y_{target}$, i.e., achieving $f(x)=y_{target}$. It is not clear whether evaluating attack efficiency in the targeted setting with this formulation is reasonable. Other than that, the evaluation methods and criteria seem to make sense for this problem. Theoretical Claims: There appear to be no proofs in the main part. The derivations in the Appendix were not carefully checked. Experimental Designs Or Analyses: It is not entirely clear whether the formulation of a targeted attack is reasonable (see the Methods And Evaluation Criteria section for details). Other than that, there appear to be no issues. The adversarial example definition in line 199 doesn't include clipping the resulting image to [0, 1], probably to save some space in the text. But in the code provided along with the submission, it was checked that the clamping is happening. Supplementary Material: I have mostly reviewed Appendix A, G and Figure 6. Relation To Broader Scientific Literature: The paper relates to the previous analysis of the decision boundary curvature and norm vector-based attacks. 
Essential References Not Discussed: There appear to be no missing essential references. Other Strengths And Weaknesses: Strengths 1. A novel black-box attack outperforming the state of the art, as demonstrated in extensive experiments. 2. Studying decision-based attacks is important because they are more practical than other types of black-box attacks and can pose a significant threat to safety-critical machine learning applications. Weaknesses 1. Figure 1 is a bit overloaded with notation. It would be good to simplify it. Other Comments Or Suggestions: No further comments. Questions For Authors: 1. Have you observed any difference when analysing the curve of the decision boundary (e.g., Figure 3) for undefended and adversarially trained models? 2. Does the sensitivity analysis in Tables 2 and 3 include adversarially robust models? Would you expect some other hyperparameters to work better for them? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's meticulous evaluation and valuable comments, which have greatly helped improve our manuscript. 1. The **problem formulation is reasonable** for two key reasons. **First**, in hard-label black-box attacks (e.g., HSJA, TA, CGBA), where only the model’s output label is available and gradient information is inaccessible, generating an initial adversarial example for a target label in targeted attacks is nearly impossible. As a result, these methods require a target image $ y_{\text{target}}$ as the initial adversarial example. We will explicitly highlight this methodological distinction in our revision to prevent potential confusion for readers. **Second**, HSJA, TA, and CGBA all evaluate attack efficiency under this targeted setting in their experiments. To ensure a fair comparison, we adopt the same setting in our experiments. 2. We agree that the adversarial example definition should explicitly include **clipping to [0,1]**. This was properly implemented in our code but inadvertently omitted from the text. We will add this clarification in the revision. We sincerely thank the reviewer for carefully checking our code. 3. Following your valuable advice, we will carefully **simplify Figure 1** in the revised paper. 4. Agreeing with the reviewer, in Figure 2 and 3, we observe that **models without defense** (i.e., undefended models) usually have decision boundaries with low curvature and large $k_{\text{bridge}}$, while robust models normally have high curvature and small $k_{\text{bridge}}$. The table below shows the relationship between average $k_{\text{bridge}}$ and average $\ell_2$ distortion for successful attacks on CIFAR100 and CIFAR10 datasets across undefended ViT, CNN, and defended WRN models. 
It demonstrates that **robust WRN models generally have higher average $\ell_2$ distortion and lower $k_{\text{bridge}}$**, suggesting that adversarially trained models are associated with high decision boundary curvature. | Model (Dataset) | ViT (CIFAR100) | WRN (CIFAR100) | CNN (CIFAR10) | WRN (CIFAR10) | |----------------|----------------|----------------|---------------|---------------| | AVG ℓ₂ | 0.779 | 0.991 | 0.180 | 1.198 | | AVG K_bridge | 0.370 | 0.350 | 0.362 | 0.341 | 5. The sensitivity analysis in Tables 2 and 3 does not include adversarially robust models. **Here we add a sensitivity analysis on robust models**. By adjusting the hyperparameters of TtBA, it can certainly yield better performance for robust models, as demonstrated in the following table. Specifically, we modify the setting of $ k = \check{b} \cdot k_\text{bridge} $ by varying the default value of $ \check{b} = 2/3 $ across {0.55, 0.575, 0.60, 0.625, 0.65, 2/3, 0.70}, and evaluate the AUC of two WRN models on the CIFAR-100 and TinyImageNet datasets. The results demonstrate that, for robust models, the setting $ \check{b} = 0.625 $ achieves the best performance in 3 out of 4 experiments, clearly surpassing the $\check{b} = 2/3$ setting. This difference likely arises because robust models can effectively conceal gradient information, causing normal vector estimation to become less reliable. Consequently, assigning a smaller weight to the normal vector can enhance the effectiveness of perturbation optimization. 
| Dataset (model) | Attack Type | b̌=0.55 | b̌=0.575 | b̌=0.60 | b̌=0.625 | b̌=0.65 | b̌=2/3 | b̌=0.70 | |-------------|-------------|--------|---------|--------|---------|--------|-------|--------| | **CIFAR100** (WRN) | Non-targeted | 8763.6 | 8790.2 | 8657.4 | **8605.4** | 8681.8 | 8784.6 | 8816.2 | | | Targeted | 22786 | 22288 | 21977 | **20799** | 22806 | 22973 | 23172 | | **TinyImageNet** (WRN) | Non-targeted | 31864 | 31230 | 30898 | **29437** | 29978 | 30026 | 30569 | | | Targeted | 121442 | 120874 | **115260** | 115681 | 116891 | 116976 | 117997 | --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing the points raised in my review. I have no further questions and I am keeping my score. --- Reply to Comment 1.1.1: Comment: Thank you again for your careful review and for checking our code. Your comment reminded us to clarify the targeted attack formulation and the definition of adversarial examples, which is crucial for eliminating misunderstandings and improving our work.
Summary: This paper introduces a decision-based black-box adversarial attack, termed the Two-third Bridge Approach---TtBA, that focuses on optimizing perturbation directions for attack queries by leveraging normal vectors and the *bridge* direction, to relieve query complexity. This *bridge* direction is a weighted combination of the current perturbation direction and its normal vector, where the weight parameter is k. Through empirical evaluations, the authors show $k=\frac{2}{3}k_{\text{bridge}}$ offers the optimal directional alignment. With validation on various datasets, TtBA shows improved performance over existing non-targeted and targeted attack methods. Claims And Evidence: The claims made in the submission are partially supported by empirical evidence. The major concern of the reviewer is whether the hypothesis that the decision boundary of DNNs is smooth and locally concave still holds for robust models. Only a robust WideResNet was studied in the experiments. The reviewer finds this to be slightly lacking. Are other adversarial defenses, such as input transformation-based ones and adversarial training techniques tailored for ViTs, relevant in this context? Methods And Evaluation Criteria: The evaluation setups are reasonable and supportive of this work. Theoretical Claims: The reviewer briefly went through the proofs in the Appendix and finds that they are supportive of the claims made, but the reviewer did not check the correctness of the proofs. Experimental Designs Or Analyses: Experimental designs are valid and supportive of the effectiveness of the proposed method. Supplementary Material: The reviewer briefly went through the appendix but did not check any proof in detail. Relation To Broader Scientific Literature: The paper makes significant contributions to the broader scientific literature on black-box adversarial attacks, where research on decision-based hard-label attacks is essential for theoretical and practical advancements. 
Essential References Not Discussed: A few major decision-based attacks introduced in [1-2] are missing in the current experimental comparisons. The authors should discuss the relationship of TtBA to these works and/or explain why they were not included. [1] Wan, Jie, Jianhao Fu, Lijin Wang, and Ziqi Yang. “BounceAttack: A Query-Efficient Decision-Based Adversarial Attack by Bouncing into the Wild.” In 2024 IEEE Symposium on Security and Privacy (SP), 1270–86, 2024. https://doi.org/10.1109/SP54263.2024.00068. [2] Park, Jeonghwan, Paul Miller, and Niall McLaughlin. “Hard-Label Based Small Query Black-Box Adversarial Attack.” In 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 3974–83. Waikoloa, HI, USA: IEEE, 2024. https://doi.org/10.1109/WACV57701.2024.00394. Other Strengths And Weaknesses: **Strengths:** - The authors are to be congratulated on the extensive experiments on all 5 datasets with various common model architectures that include CNNs and ViTs. - One of the major concerns, i.e., why 2/3 is the optimal parameter, has been explained in detail and empirically validated (in Figure 4 and Appendix G). **Weaknesses**: - Apart from the concerns mentioned above, the reviewer finds that the experiments are quite focused on comparing perturbation size ($\ell_2$ distance) under the same query budget. The reviewer would have expected the work to also report the inverse, that is, the query complexity reduction under the same perturbation budget. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
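As described in the summary above, the bridge direction is a weighted combination of the current perturbation direction and the estimated boundary normal, with the weight set to $k = \frac{2}{3} k_{\text{bridge}}$. The sketch below is a generic convex-combination illustration only: the paper's exact update rule and the definition of $k_{\text{bridge}}$ are not reproduced here, and the numeric $k_{\text{bridge}}$ value is borrowed from the averages the authors report elsewhere in the discussion (roughly 0.35).

```python
import numpy as np

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

def bridge_direction(d, n, k):
    """Illustrative weighted combination of the current perturbation
    direction d and the estimated boundary normal n; NOT the paper's
    exact update rule, just a convex-combination sketch."""
    return unit((1.0 - k) * unit(d) + k * unit(n))

rng = np.random.default_rng(2)
dim = 3 * 32 * 32               # a flattened CIFAR-sized image
d = rng.standard_normal(dim)    # current perturbation direction
n = rng.standard_normal(dim)    # estimated normal vector
k = (2.0 / 3.0) * 0.35          # k = (2/3) * k_bridge, with k_bridge ~ 0.35

v = bridge_direction(d, n, k)
```

The resulting unit vector stays dominated by the current direction (weight 1 - k) while being pulled toward the estimated normal, which is the qualitative behavior the summary attributes to the bridge.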
Rebuttal 1: Rebuttal: Thank you for your comments. 1. SOTA studies, including HSJA, TA, and CGBA, strongly support the hypothesis that **the decision boundary of DNNs remains smooth and locally concave even for many robustly trained models**. This is because the robust training process does not interfere with normal vector estimation. We conduct additional experiments using the **Towards Robust Vision Transformer (RVT)** defense from [1], which enhances ViT robustness with position-aware attention scaling and patch-wise augmentation. The corresponding decision boundary is plotted in a figure at [decision boundary of robustly trained models](https://anonymous.4open.science/r/TtBA-6ECF/DecisionBoundaryofRobustViT.pdf). In the figure, the decision boundary remains smooth and locally concave. However, for **input transformation-based defenses such as RandResizePad** in [2], the normal vector estimation process can be disrupted. This causes the decision boundary to lose its smoothness, which is shown at [abnormal decision boundary of RandResizePad](https://anonymous.4open.science/r/TtBA-6ECF/DecisionBoundaryofRandResizePad.pdf). It is important to note that **this issue is not specific to our approach**. All normal vector-based attacks, including HSJA, TA, and CGBA, encounter similar challenges and currently lack effective solutions. Since addressing these specific challenges is beyond the primary scope of our study, we did not include experiments on transformation-based defenses in our evaluation. To strengthen our revised paper, we will expand the literature review to discuss attack methods specifically designed for input transformation-based defenses and highlight their differences from normal vector-based attacks. Additionally, we will update our future work section to explore potential extensions of our method to handle such defenses effectively. 2. 
We thank the reviewer for pointing out important attack methods such as BounceAttack and SQBA, which we have now **incorporated in the literature review section**. - While **BounceAttack** improves upon HSJA by using orthogonal gradient components and introduces momentum/smooth search mechanisms, it does not address the local optima problem caused by high-curvature decision boundaries, which is the core focus of our work. We would gladly compare with BounceAttack, but the official code is currently inaccessible, preventing us from presenting detailed results. - **SQBA** uses pre-trained surrogate models for gradient estimation and relies on access to the target model's training dataset. In contrast, our decision-based attacks assume no access to the training dataset. Therefore, a direct experimental comparison with SQBA will not be included. Thank you for your understanding! 3. **The reduction in query complexity under the same perturbation budget** is shown below. Following the setup of CGBA, we set the query budget to 10,000 and the maximum $\ell_2$ perturbation strength to $\epsilon = 2.5$. We then randomly choose 500 images from ImageNet and compare the Attack Success Rate (ASR) and the average (median) queries. | Attack | Metric | VGG-19 | ResNet-50 | Inception-V3 | ViT-B32 | |--------|--------|-------------------|------------------|-------------------|------------------| | HSJA | Query | **2051.1(1071.8)** | **1833.8(1209.5)** | **2851.1(2080.1)** | **1873.9(947.5)** | | | ASR | 61.0% | 38.8% | 57.2% | 59.6% | | CGBA | Query | 2500.9(1528.5) | 3450.7(2679.0) | 3169.3(2363.0) | 2447.8(1797.0) | | | ASR | 88.2% | 52.0% | 74.4% | 79.6% | | TtBA | Query | 2350.8(1481.0) | 3546.6(2754.0) | 3098.8(2175.0) | 2384.4(1781.5) | | | ASR | **93.2%** | **61.8%** | **80.0%** | **80.4%** | - The results show that **TtBA achieves the highest ASR** across all models. HSJA has the lowest average (median) number of queries, but this is due to its much lower ASR. 
As is well-known, some images contain robust features that require more queries to attack. TtBA, with significantly higher ASR, is able to successfully attack these robust images, thus requiring more queries on average. Meanwhile, with a similar ASR, TtBA outperforms CGBA in terms of average (median) queries. On ResNet-50, TtBA also achieves significantly higher ASR (61.8\%) compared to CGBA (52.0\%). We will include these results in our revised paper. **We believe we have satisfactorily addressed all the concerns raised in our rebuttal. If the reviewer agrees, would you please kindly consider adjusting your rating?** [1] Mao, Xiaofeng, et al. "Towards robust vision transformer." In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition. 2022. [2] Xie, C., Wang, J., et al. "Mitigating adversarial effects through randomization." In arXiv preprint arXiv:1711.01991. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I feel most of my concerns are resolved. I will be keeping my score of 3: Weak accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for your insightful and constructive comments and for providing essential references. Your suggestions have significantly enhanced our work.
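The interaction between ASR and average (median) queries described in the rebuttal above can be made concrete with a minimal sketch. The per-image outcomes and the convention of computing query statistics over successful attacks only are assumptions made for illustration:

```python
from statistics import mean, median

# Hypothetical per-image attack outcomes: (success, queries_used).
# One "robust" image needs many queries; it enters the query statistic
# only for a method whose ASR is high enough to crack it.
results = [(True, 500), (True, 900), (True, 4000), (False, 10000)]

success_queries = [q for ok, q in results if ok]
asr = len(success_queries) / len(results)
avg_q, med_q = mean(success_queries), median(success_queries)
print(f"ASR: {asr:.0%}, avg queries: {avg_q:.0f}, median: {med_q:.0f}")
```

A method that fails on the hard image reports a lower average simply because that image never enters the statistic, which is the effect the rebuttal attributes to HSJA.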
Summary: The manuscript introduces an innovative bridge direction to optimize the adversarial perturbation by linearly combining the current unit perturbation direction with its unit normal vector. Via experimental observation, k = 2/3 k_{bridge} can yield a near-optimal perturbation direction. Besides, the paper designs a simple and effective approach to detect and escape the local optima, making the proposed method better than the SOTA. Claims And Evidence: The novelty claims and theoretical derivations are reasonable. Methods And Evaluation Criteria: The proposed method and the used criteria, including evaluation datasets, are common and representative. Theoretical Claims: I checked the theoretical claims, and the corresponding proofs are correct. Experimental Designs Or Analyses: The experiments and results are convincing. Supplementary Material: Supplementary material is helpful. Relation To Broader Scientific Literature: The optimization strategy may be somewhat insightful to other fields. Essential References Not Discussed: Not enough. Other Strengths And Weaknesses: Strengths: For targeted attacks, narrow adversarial regions lead to being more easily trapped in local optima. Weakness: It is unclear why d_{bridge}^{i} can be ensured to have an identical decision boundary to \hat{d}^{i}, as shown in Figure 1. Other Comments Or Suggestions: The paper includes certain novelty components, but seems not enough to meet the bar of ICML. Questions For Authors: In total, the proposed method is only an improvement of existing techniques, mainly based on HSJA, TA, qFool, GeoDA, QEBA, and CGBA. No brand-new insight is found to contribute to AI security fields. Finding k = 2/3 k_{bridge} by experiments matched with some theoretical verification, designing an escape scheme to skip the local optima, and so on are neither too challenging nor very innovative. The performance improvement is not very significant, and some results are even below the SOTA. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments. 1. We **perform a binary search** of $k = k_\text{bridge}^{i} \in (0,1]$ to identify $d_k = d_\text{bridge}^{i}$ which **has an identical decision boundary** to $\hat{d}^{i}$. - According to Figure 1, when $k$ is very small, direction $d_k$ approaches $\hat{d}^{i}$ and the decision boundary of $d_k$ is smaller than that of $\hat{d}^{i}$. - When $k=1$, $d_k = \hat{N}^i$, and its decision boundary is significantly larger than that of $\hat{d}^{i}$. **By the intermediate value theorem, there must exist $k \in (0,1]$ such that $d_k$ yields the same decision boundary as $\hat{d}^{i}$**. We will clarify the above in the revised paper. 2. Thank you for raising the concern regarding the "brand-new insight". **Our primary innovation** lies in identifying and rigorously analyzing the previously unknown relationship between decision boundary curvature and the optimization of adversarial perturbations. Existing SOTA decision based methods (HSJA, TA, QEBA, and CGBA) have largely overlooked how boundary curvature influences the occurrence of local optima that seriously impact optimization efficiency and effectiveness. In contrast, our work introduces a **novel and practical curvature metric**, $ k_\text{bridge}$, which provides the first systematic means to quantify and interpret decision boundary geometry. This new understanding allows us to pinpoint precisely why and how adversarial attacks fail under certain geometric conditions, delivering useful insights previously missing from the literature. Leveraging this discovery, we developed TtBA, a significantly **more effective and efficient** decision-based attack method. Further, we introduced a **robust mechanism specifically designed to detect and escape local optima** induced by boundary curvature, directly addressing an important limitation unexplored by previous studies. 
In summary, rather than merely improving upon prior methods, our research **introduces brand-new conceptual understanding and practical tools** with significant contributions to the AI security field. We will further clarify the above discussion in the revised paper. 3. We acknowledge that the proposed techniques may **appear conceptually simple** at first glance. However, identifying critical yet overlooked issues, including the presence of local optima due to high-curvature decision boundaries, is far from trivial. Developing practical, intuitive, and effective solutions to address such issues further highlights the strength and novelty of our contributions. Hence, **the simplicity of our solution does not diminish its novelty or importance**; rather, it underscores the clarity and practical value of our research. **The novelty of our technical contributions** lies precisely in uncovering significant issues that have not received sufficient attention in existing literature. Despite extensive research, prior SOTA decision based methods have largely ignored how boundary curvature leads to local optimization traps that severely hinder adversarial attacks. Our research uniquely identifies this critical gap and proposes robust, intuitive, and demonstrably effective methods to address it. Therefore, while the proposed solutions might seem intuitive after being introduced, we argue that **recognizing and formulating these specific problems** and subsequently **developing simple yet powerful techniques** constitute substantial and novel contributions to the AI security community. We will further clarify the above discussion in the revised paper. 4. **Our evaluation is rigorous**, covering extensive experiments across five datasets and seven distinct model architectures, representing a **comprehensive and highly challenging benchmark**. 
It is noteworthy that consistently surpassing SOTA performance across all tests is exceptionally difficult, which is a challenge similarly faced by recent leading methods such as HSJA, TA, and CGBA. Despite this inherent difficulty, our method achieves substantial performance improvements. Specifically in Table 1, TtBA clearly outperforms existing SOTA methods in 103 out of 108 experimented scenarios, with few remaining cases closely matching the best performance. Furthermore, for robust models evaluated in Figure 5, **we intentionally refrained from fine-tuning parameters** to rigorously test our method's robustness and generalization capability. Even under this conservative setting, we demonstrated superior performance in 37 out of 40 cases. Additional targeted parameter tuning can further enhance the effectiveness of TtBA. However, we deliberately emphasized our method's strong general performance and broad applicability across diverse settings, thereby reinforcing the substantial practical value and robustness of our contributions. **We believe we have satisfactorily addressed all the concerns raised in our rebuttal. If the reviewer agrees, would you please kindly consider adjusting your rating?** --- Rebuttal Comment 1.1: Comment: Thank the authors for the careful feedback. After reading the rebuttals of the authors, I think most of my concerns are addressed. I will be willing to raise my rating. --- Reply to Comment 1.1.1: Comment: Thank you very much for your positive and encouraging review. We sincerely appreciate your valuable comments and constructive suggestions, which have helped improve our work.
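As a toy illustration of the binary search described in point 1 of the rebuttal above: here real model queries are replaced by a synthetic 2-D boundary-distance function, and the bridge direction is assumed to take the re-normalized convex-combination form $(1-k)\hat{d} + k\hat{N}$; both are assumptions for illustration only, not the paper's exact procedure.

```python
import math

def normalize(x, y):
    n = math.hypot(x, y)
    return (x / n, y / n)

def bridge(d_hat, n_hat, k):
    # Assumed form: convex combination of the current unit direction
    # and its unit normal, re-normalized.
    return normalize((1 - k) * d_hat[0] + k * n_hat[0],
                     (1 - k) * d_hat[1] + k * n_hat[1])

def boundary_dist(d):
    # Toy stand-in for querying the model: distance to an elliptical
    # decision boundary along unit direction d, so it varies smoothly
    # (and here monotonically) as k sweeps from 0 to 1.
    return 1.0 / math.hypot(d[0] / 2.0, d[1])

d_hat, n_hat = (1.0, 0.0), (0.0, 1.0)
target = 1.5  # boundary distance we want d_k to match

# Binary search on k in (0, 1]: the distance moves continuously from
# boundary_dist(d_hat) = 2 down to boundary_dist(n_hat) = 1, so by the
# intermediate value theorem a matching k exists.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if boundary_dist(bridge(d_hat, n_hat, mid)) > target:
        lo = mid  # still too far: move toward the normal
    else:
        hi = mid
k_bridge = (lo + hi) / 2
print(k_bridge, boundary_dist(bridge(d_hat, n_hat, k_bridge)))
```

In the real attack, `boundary_dist` would be estimated with hard-label model queries rather than a closed-form expression.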
Summary: The paper proposes the TtBA method for decision-based black-box adversarial attacks. It introduces a new bridge direction, a weighted combination of the current direction and its normal vector, controlled by a weight parameter $k$. Experiments on multiple datasets and models show that TtBA outperforms state-of-the-art methods in both targeted and non-targeted attacks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: ## Strength - The proposed method improves the performance of decision-based adversarial attacks. - The paper is well written. ## Weakness - The contribution of this paper is a little weak, from my perspective. - Some settings are somewhat empirical and without rational explanation. For example, $k = 2/3k^i_{bridge}$. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. Thank you for raising concerns regarding **the strength of our contributions**. We introduce a fundamentally new and practically valuable metric, $k_\text{bridge}$, specifically designed to quantify decision boundary curvature, a critical but previously unexplored factor in adversarial attacks. This metric reveals vital insights into how geometric properties of decision boundaries directly affect the effectiveness of decision-based attacks. Existing SOTA decision based attack methods such as HSJA, TA, qFool, GeoDA, QEBA, and CGBA **overlook the critical issue of local optima caused by high-curvature boundaries**, significantly reducing their attack effectiveness. Addressing this gap, we make three substantial contributions: (1) We propose $k_\text{bridge}$, the first quantitative metric in literature, to **rigorously measure boundary curvature**, enabling systematic analysis and deeper understanding of adversarial optimization. (2) Using insights from $ k_\text{bridge}$, we uncover a **previously unidentified linear relationship between boundary curvature and optimal perturbation directions**. Leveraging this discovery, we develop the TtBA method for highly effective decision-based black-box attack. (3) We **identify a low attack efficiency problem caused by high boundary curvature** and propose a robust mechanism to detect and escape them, significantly enhancing optimization efficiency and attack success rates. Our extensive experiments across multiple widely-used datasets and models clearly demonstrate the substantial practical impact of our contributions, representing a **significant advancement in adversarial machine learning**. We will further clarify the above discussion in the revised paper. 2. While the setting $k=2/3k_\text{bridge}^{i}$ is empirically motivated, it is also **supported by an extensive sensitivity analysis in Appendix G**. 
Specifically, we analyze the sensitivity of our method to different settings of $k$ by varying the default value of $2/3$ across {0.55, 0.60, 0.65, 0.70, 0.75} in Table 2, and similarly adjusting other parameters in Table 3. Our results reveal two key findings: first, the current configuration achieves the best performance in 10 out of 16 experimental scenarios; second, alternative parameter values maintain comparable effectiveness. These results demonstrate that, while $k=2/3$ represents the most effective choice, TtBA's performance remains robust to parameter variations, ensuring methodological reliability across different configurations. **We believe we have satisfactorily addressed all the concerns raised in our rebuttal. If the reviewer agrees, would you please kindly consider adjusting your rating?**
Dissecting Submission Limit in Desk-Rejections: A Mathematical Analysis of Fairness in AI Conference Policies
Accept (poster)
Summary: In this paper, the authors highlight that random desk rejection based on per-author submission limits might be unfair. They propose individual and group unfairness definitions to make the AI conference desk rejection policy more fair. The authors propose an LP optimization algorithm to reduce group unfairness (as they show that reducing individual unfairness is computationally intractable). They find that their algorithm outperforms existing policies. Claims And Evidence: - The paper is well-written and easy to read, though a lot of the theoretical findings/justifications seem to be in the appendix - the group unfairness definition is confusing: are the definitions applied for each group in each case? or should there be a maximum taken across all groups (similar to the individual unfairness definition)? or is it assumed that all individuals belong to a single group? in that case, it is unclear how group unfairness differs from average fairness - The utility of the approach is evaluated through a case study. However, the authors could have considered simulation setups, and/or datasets where decisions are public with different simulated policies (e.g. ICLR) Methods And Evaluation Criteria: - The utility of the approach is only evaluated through a case study. However, the authors could have considered simulation setups, and/or datasets where decisions are public with different simulated policies (e.g. ICLR). More experimental validation would strengthen the paper. - The group unfairness definition is confusing when there are multiple groups, and this has not been studied in the experimental validation section Theoretical Claims: - Proposition 5.6 (order of group and individual unfairness seems incorrect): have the authors accidentally switched the signs? 
the proof seems correct in Appendix B - It is unclear how the approach works for individuals belonging to multiple groups Experimental Designs Or Analyses: - The experimental design makes sense, but is limited. The authors take a case-study based evaluation approach, but using more simulation studies would have helped test the validity of the algorithm. They only compare to a simple baseline (which is the current desk rejection approach in CVPR) - Further, approximate solutions for individual unfairness could have been considered. - the group unfairness definition is confusing: are the definitions applied for each group in each case? or should there be a maximum taken across all groups (similar to the individual unfairness definition)? or is it assumed that all individuals belong to a single group? in that case, it is unclear how group unfairness differs from average fairness Supplementary Material: - Appendix, especially Appendix B Relation To Broader Scientific Literature: - The results relate to broader literature in fair selection and fairness in general. It might be worth linking the paper to related works in fair selection (e.g. https://dl.acm.org/doi/abs/10.1145/3391403.3399482 ) Essential References Not Discussed: The authors do not connect to prior works in fair selection and quota based selection (e.g. https://dl.acm.org/doi/abs/10.1145/3391403.3399482, https://arxiv.org/pdf/2204.03046). A sentence summarizing this literature field and explaining how the author's work differs would be helpful. Other Strengths And Weaknesses: *strengths* - The paper is well-written and easy to read, though a lot of the content seems to be in the appendix - the approach seems intuitive and is solving an important problem *weaknesses* - the group unfairness definition is confusing: are the definitions applied for each group in each case? or should there be a maximum taken across all groups (similar to the individual unfairness definition)? 
or is it assumed that all individuals belong to a single group? in that case, it is unclear how group unfairness differs from average fairness - The utility of the approach is evaluated through a case study. However, the authors could have considered simulation setups, and/or datasets where decisions are public with different simulated policies (e.g. ICLR) Other Comments Or Suggestions: - Proposition 5.6 (order of group and individual unfairness seems incorrect): have the authors accidentally switched the signs? the proof seems correct in Appendix B Questions For Authors: - the group unfairness definition is confusing: are the definitions applied for each group in each case? or should there be a maximum taken across all groups (similar to the individual unfairness definition)? or is it assumed that all individuals belong to a single group? in that case, it is unclear how group unfairness differs from average fairness - can authors describe how their work differs from fair selection (e.g. under quotas)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's acknowledgement of this paper’s writing quality and real-world impact. Below, we provide clarifications addressing the weaknesses and questions: ### **Weakness 1 & Question 1: Group Fairness Definition** Our definitions of group fairness are inspired by utilitarian social welfare, which measures the average desk-rejection damage across all authors (see lines 282–283 on page 6). This differs from the conventional notion of group fairness, which assesses fairness between distinct groups. In light of this, we plan to rename our “group fairness” metric to “average fairness” (or “entire fairness”) in the next version to avoid confusion. ### **Weakness 2: Simulation Setups** Although direct testing on real backend desk-rejection data is difficult, we have added an evaluation of our approach using public ICLR (2021–2023) data from the OpenReview API. We compared the current desk-rejection method (Algorithm 2, p.8) with our proposed fairness-aware method (Algorithm 1, p.7). Using the Python PuLP library with submission limits $x=[4,6,8,10,12,14]$, we obtained the results shown below: **Table R.1: ICLR ’21 Results** | Method / Limit $x$ | 4 | 6 | 8 | 10 | 12 | 14 | |---------------------------------|-------|-------|-------|-------|-------|-------| | Cur. Desk. Rej. | 0.112 | 0.059 | 0.033 | 0.021 | 0.013 | 0.009 | | Fair Desk. Rej. | 0.074 | 0.035 | 0.018 | 0.011 | 0.006 | 0.004 | **Table R.2: ICLR ’22 Results** | Method / Limit $x$ | 4 | 6 | 8 | 10 | 12 | 14 | |---------------------------------|-------|-------|-------|-------|-------|-------| | Cur. Desk. Rej. | 0.112 | 0.059 | 0.035 | 0.023 | 0.013 | 0.007 | | Fair Desk. Rej. | 0.073 | 0.036 | 0.019 | 0.010 | 0.005 | 0.002 | **Table R.3: ICLR ’23 Results** | Method / Limit $x$ | 4 | 6 | 8 | 10 | 12 | 14 | |---------------------------------|-------|-------|-------|-------|-------|-------| | Cur. Desk. Rej. | 0.115 | 0.056 | 0.031 | 0.022 | 0.015 | 0.009 | | Fair Desk. Rej. | 0.074 | 0.033 | 0.018 | 0.011 | 0.007 | 0.004 | Here, we take the group fairness metric (Definition 5.5), which represents the average cost function (Definition 5.2) across all authors. For instance, a value of 0.02 indicates that on average an author experiences a 2% paper desk-rejection rate. For more details, please refer to W2 of our response to Reviewer UGt2. ### **Weakness 3: Approximate Solution for Individual Fairness** In practice, one could relax the discrete constraints to make $r$ continuous and use softmax to substitute the infinity norm, leading to an alternative objective: $$\min_{r \in [0,1]^m} \mathsf{Softmax}(1_n - D^{-1}Wr) \quad \text{s.t.} \quad \frac{Wr}{x} \leq 1_n.$$ This formulation allows for smooth optimization with standard convex solvers. We note that the minimizer of this relaxed objective is not guaranteed to be optimal for the original problem, and we leave a detailed empirical evaluation of this approximation for future work. Additionally, existing mixed-integer programming solvers can handle some medium-scale and non-worst-case problems. For additional discussion, please refer to W3 of our response to Reviewer UGt2. ### **Weakness 4: Proposition 5.6** We thank the reviewer for pointing this out. We agree that the proof in Appendix B is correct, and the version in the main text contains a typo where $\leq$ should be $\geq$. We will correct this in our next revision. ### **Weakness 5: Baselines** We appreciate the reviewer’s concern regarding baselines. The number of baselines is limited because we are addressing a novel problem that, to the best of our knowledge, has not been studied before (its novelty has also been acknowledged by Reviewer UGt2 and Reviewer q5TE). Our chosen desk-rejection method serves as the only existing baseline and is widely adopted in top conferences, not just CVPR but also other conferences like KDD. 
### **Question 2: Difference Compared with Fair Selection Under Quotas** Thanks for introducing these relevant works, and we will cite them in future revisions. We believe these works are quite different from ours. First, considering [EGGL’20], they introduce a two-group setting where candidates are selected from two different groups, and the selection rates from these groups are forced to be close. In contrast, in our paper, we use the term group fairness in Definition 5.5 and individual fairness in Definition 5.4 to denote the average or maximum cost function across all authors without any pre-defined group division. Besides, although we also have a quota (i.e., paper submission limit) similar to the ranking fairness paper [YXA’23], we do not incorporate mechanisms such as ranking, consumer probability, or examination probability. This means our setting is fundamentally different. ### **References** [EGGL’20] On Fair Selection in the Presence of Implicit Variance. EC 2020. [YXA’23] Vertical Allocation-based Fair Exposure Amortizing in Ranking. SIGIR 2023.
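To make the fairness metrics referenced above concrete, here is a small sketch. The authorship lists, the rejection decision, and the exact forms (author cost = fraction of that author's papers desk-rejected, individual unfairness = maximum cost, group/average unfairness = mean cost) are reconstructed from the rebuttal's description and should be treated as assumptions:

```python
# Hypothetical authorship data: author -> indices of submitted papers.
papers_by_author = {
    "senior": [0, 1, 2, 3],   # assume this exceeds a limit of 3
    "junior_a": [0],          # co-authored only the rejected paper
    "junior_b": [3],
}
rejected = {0}  # paper 0 is desk-rejected

def cost(author):
    # Fraction of the author's papers that are desk-rejected.
    papers = papers_by_author[author]
    return sum(p in rejected for p in papers) / len(papers)

costs = {a: cost(a) for a in papers_by_author}
individual_unfairness = max(costs.values())          # worst-off author
group_unfairness = sum(costs.values()) / len(costs)  # average over authors
print(costs, individual_unfairness, group_unfairness)
```

Note how the junior author with a single paper bears the maximal cost (1.0) even though the limit was violated by the senior co-author, which is exactly the unfairness the paper targets.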
Summary: This paper studies the problem of fairly desk-rejecting papers from conferences, where some of the authors have exceeded per-author submission limits. The paper establishes that this can’t be done without desk-rejecting papers from authors who haven’t violated the limit (since their co-authors might have violated the limit), and develops algorithms for reallocating the costs of desk-rejection away from authors with fewer submissions. Claims And Evidence: The key claims in this paper: - Desk-rejecting based on author submission limits must generate unfair collateral damage. - Finding individually-fair solutions that minimize the maximum cost of desk-rejection to authors is NP hard. - The authors’ LP-based solution to minimizing the average cost succeeds in improving the fairness of desk-rejections. are well supported by the arguments presented in the paper. Methods And Evaluation Criteria: Yes Theoretical Claims: I did not thoroughly check the proofs. Experimental Designs Or Analyses: The authors use a case study, rather than an "experiment", to demonstrate the effect of their proposed system on desk rejection. This is sufficient to make their case -- further experiments or analyses would not have provided additional insight. Supplementary Material: No Relation To Broader Scientific Literature: To my knowledge the fairness of these desk-rejection methods has not been studied before. 
Essential References Not Discussed: No Other Strengths And Weaknesses: The only real problem with this paper is that it doesn't consider the equilibrium effects of desk-rejection policies. For example, why would authors submit more than the allowed number of papers in the first place? If having a paper desk-rejected is catastrophic for a junior researcher but merely inconvenient for their senior co-author, then maybe they’ll mutually choose to exclude the senior author from the author list (following norms more common in other disciplines like economics). On the other hand, if having fewer submissions reduced your chance of having a paper desk rejected, could we see senior researchers working with fewer junior authors? Wouldn’t this also have negative effects on the careers of junior researchers? It is my strong suspicion that these effects dominate. In other words, the main effect of author submission limit policies is that they change *which* papers are submitted (and with which co-authors). I understand that this problem becomes much harder once one considers the equilibrium, but I don't think a solution is practical unless it does. Other Comments Or Suggestions: NA Questions For Authors: See above: how could you incorporate equilibrium effects into your approach? I suspect this is too big a question for a rebuttal, though I would raise my score substantially if a compelling answer was given. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for acknowledging the novelty of our work and the strong support for our claims. We are pleased to further clarify the motivation behind our paper and to discuss the equilibrium perspective. ### **Weakness 1 & Question 1: Equilibrium Effects** Thank you for this interesting and inspiring comment. We believe that the issue raised is highly worthwhile for future study. However, it relates more to the motivation behind the problem than to the flaw in our technical contributions. 1) Our work demonstrates that even before reaching equilibrium, a submission-limit desk-rejection policy can lead to unfair outcomes. The gaming toward equilibrium mentioned in your comment does not mitigate this unfairness but may, in fact, worsen it. Therefore, we propose alternative approaches that may address the fairness issue. 2) We agree that in the long term, gaming toward equilibrium might occur after applying a fairness-aware desk-rejection strategy. This is an important question for analyzing the effects of different policies at equilibrium, and we leave a detailed study of this topic for future work. We hope that our current work, as the first analysis of the fairness of desk-rejection methods, serves as a strong starting point. 3) To incorporate equilibrium effects into this work, we can formally define the problem as follows. Consider that there are $n$ authors. Each author must decide which co-authors to collaborate with and how many projects to pursue. The cost represents the time or resources required for each project. The naive reward is that a paper is not desk rejected, while the final reward is the paper being accepted. The key question is what the optimal strategy is for each author in selecting collaborators and projects. We acknowledge that this formulation omits several realistic considerations, such as relationships among authors, project idea ownership, resource allocation, and more. 
Solving for the Nash equilibrium of this game would require a more detailed and concrete formulation, which is beyond the scope of the current paper and rebuttal. Nonetheless, we recognize this as an interesting direction for future work. ### **Weakness 2: Why Authors Submit More Papers Than the Limit** This phenomenon is common among well-known professors who supervise many students and often do not have the time to verify whether the submission limit is exceeded. They may not be aware of the exact number of papers they have submitted. For instance, in conferences like CVPR and ICCV, a 25-paper limit is sometimes necessary to manage submissions, which highlights how busy these prominent professors can be. ### **Weakness 3: Excluding Senior Authors from the Author List** Different research communities have varying traditions regarding authorship. In some cases, it is difficult to exclude senior researchers because they may have provided essential funding or other support. Excluding them could lead to lab politics and harm the relationships between junior and senior researchers, with potentially severe consequences, such as affecting recommendation letters or graduation permissions. ### **Weakness 4: Senior Researchers Working with Fewer Junior Researchers** Most projects require significant hands-on contributions, which senior authors often cannot provide due to time constraints. To maximize their outcomes, senior researchers typically prefer to collaborate with junior researchers on each project. Additionally, many senior researchers have a responsibility to work with junior colleagues, such as PhD advisors with their students or tech leads with their engineers. In this paper, we examine the desk-rejection problem from a conference organizer’s perspective, where over-submission has already occurred. Our goal is to minimize the cost imposed on junior researchers, thereby protecting them from the negative impacts of submission-limit violations. 
--- Rebuttal Comment 1.1: Comment: I'll stick with my weak accept recommendation. > They may not be aware of the exact number of papers they have submitted. If they're so casually ignorant of the papers they're submitting I might hope they don't mind being left off the author list of a few! > it is difficult to exclude senior researchers This is true. Maybe these limits will help shift the norms in the field. --- Reply to Comment 1.1.1: Comment: Thank you for your positive evaluation and valuable suggestions! Yes, we hope our work may shed light on the AI conference submission system design to create a better and more fair research environment. We appreciate your insightful discussion.
Summary: This paper discusses an interesting fairness issue that arises in AI conference paper submission and shows that the current desk-rejection policy (rejecting papers when submission limits are exceeded) can unfairly disadvantage early-career researchers, whose submissions may be rejected because senior co-authors exceeded the paper-count limit. Based on this, the authors first define the cost incurred by an author when a paper is rejected, then provide two fairness definitions based on this cost function, and finally build two optimization frameworks, for individual and group fairness respectively.

## Update after rebuttal

The authors have addressed my concerns. Regarding the supplementary empirical evidence and the novelty of this paper, I have changed my recommendation from 'Weak Accept' to 'Accept'. I believe this paper will provide valuable insights for future related work.

Claims And Evidence: This paper makes three main claims:
- An ideal system that rejects papers solely based on each author's excessive submissions is mathematically impossible when there are more than 3 authors. This claim is proven by Theorem 4.3 in Section 4.2, given Lemmas A.6 and A.7.
- Optimizing individual fairness is NP-hard, while group fairness can be solved with an LP solver. This claim is proven by Theorem 5.11 in Section 5.2.
- The proposed method is effective compared with the current rejection policy. This claim is supported by Example 6.1. Though the case study is clearly presented, it is a hypothetical scenario with no real-world data to support it. I understand that collecting such data is very difficult due to the anonymity of submissions and privacy protection, but without it, it is challenging to assess the severity of the fairness issue argued in this paper and whether the proposed method should be adopted to modify current AI conference policies.
Methods And Evaluation Criteria: This paper does not provide any experiments, so the evaluation (no benchmark datasets, no baselines) may feel less convincing in terms of practical validation. While the theoretical framework is well-grounded and makes sense for addressing the problem, the lack of empirical results weakens the paper's ability to demonstrate real-world effectiveness and applicability.

Theoretical Claims: Yes, I have checked the proofs in the appendix for the theoretical claims, and they look correct to me.

Experimental Designs Or Analyses: This paper does not have an experimental section.

Supplementary Material: Yes, I have reviewed the supplementary material, including the proofs and case studies.

Relation To Broader Scientific Literature: The key contribution relates to fairness in machine learning in the context of AI conference paper submissions. To the best of my knowledge, although fairness in machine learning has been discussed in multiple real-world applications, such as NLP or vision tasks, I have not seen any prior research addressing this particular situation. From this perspective, the paper is quite novel.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Strengths:
- The novelty is strong. This paper addresses a timely fairness concern in the AI research community that has not been rigorously studied before.
- The proofs are rigorous, and the theoretical discussion of the problem is solid.
- I believe the paper has potential for practical impact.

Weaknesses:
- No empirical analysis, either of the severity of the problem or of the effectiveness of the method. For example, how much bias mitigation is actually achieved after applying such a new policy?
- The NP-hardness of individual fairness optimization makes this framework impractical for real-world applications.
Other Comments Or Suggestions: In the fairness literature, group fairness is more about ensuring parity of some statistical measure across different demographic groups, while individual fairness is more about treating similar individuals similarly. From the definitions in this paper, individual fairness measures the worst-case cost for an author, so it might be better to name it "worst-case fairness." For group fairness, since it uses the average, it might be more appropriate to call it "average fairness" to avoid confusion with the conventional definitions used in the fairness literature.

Questions For Authors: I understand that collecting such data is challenging, but would it be possible to validate the method, even on a small scale, using real-world datasets?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for recognizing the novelty, mathematical rigor, and potential social impact of our work. We appreciate the constructive feedback and address the concerns as follows:

### **Weakness 1: The Severity of the Problem**

We acknowledge that direct evaluation on real conference desk-rejection data is challenging due to data access restrictions. Although a direct evaluation is infeasible at this stage, several indirect observations support the issue's significance:

1) Submission numbers at major conferences have surged. For instance, NeurIPS grew from 6,743 submissions in 2019 to 15,671 in 2024, and ICML from 3,424 to 9,653 over the same period.
2) Table 1 of our paper shows that many major conferences (e.g., CVPR, ICCV, AAAI, IJCAI, KDD, WSDM) have adopted submission-limit-based desk-rejection policies.
3) We conducted a new analysis of ICLR 2023 data, collecting 12,451 authors and 3,793 papers using the OpenReview API. Under an 8-paper limit (as applied by IJCAI 2021–2025), we found that 506 papers (13.3%) had authors exceeding the limit, affecting 2,114 authors (17.0%).

This evidence further strengthens the urgency of addressing fairness issues in current policies.

### **Weakness 2 & Question 1: Effectiveness of the Method and Real-World Validation**

Although direct testing on real backend desk-rejection data is difficult, we have empirically evaluated our approach using **public ICLR (2021–2023) data from the OpenReview API**. We compared the current desk-rejection method (Algorithm 2, page 8), which rejects all papers with non-compliant authors based solely on submission order, with our proposed group fairness optimization method (Algorithm 1, page 7), which prioritizes rejecting papers from senior researchers with many submissions to better protect junior researchers.
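For concreteness, the group-fairness program solved by Algorithm 1 can be set up with PuLP along the following lines. This is only a minimal sketch: the instance below is a tiny hypothetical example (not the real ICLR data), and the variable names are illustrative.

```python
import pulp

# Tiny hypothetical instance (illustrative only, not the real ICLR data):
# W[i][j] = 1 if author i is on paper j; x is the per-author paper limit.
W = [
    [1, 1, 1, 0],  # author 0 co-authors papers 0, 1, 2
    [0, 1, 1, 1],  # author 1 co-authors papers 1, 2, 3
    [1, 0, 0, 0],  # author 2 co-authors paper 0 only
]
x = 2
n, m = len(W), len(W[0])
papers_per_author = [sum(row) for row in W]  # diagonal of D

prob = pulp.LpProblem("group_fair_desk_rejection", pulp.LpMinimize)
# keep[j] = 1 if paper j survives desk rejection, 0 if it is rejected.
keep = [pulp.LpVariable(f"keep_{j}", cat="Binary") for j in range(m)]

# Objective: average cost across authors, where an author's cost is the
# fraction of their papers that are desk-rejected (cf. Definitions 5.2/5.5).
prob += pulp.lpSum(
    (papers_per_author[i] - pulp.lpSum(W[i][j] * keep[j] for j in range(m)))
    * (1.0 / papers_per_author[i])
    for i in range(n)
) * (1.0 / n)

# Each author may keep at most x papers after desk rejection.
for i in range(n):
    prob += pulp.lpSum(W[i][j] * keep[j] for j in range(m)) <= x

prob.solve(pulp.PULP_CBC_CMD(msg=False))
kept = sorted(j for j in range(m) if pulp.value(keep[j]) > 0.5)
avg_cost = pulp.value(prob.objective)  # optimal average rejected fraction
```

On this toy instance the optimal average cost is $2/9 \approx 0.222$; on real conference data, $W$ and the per-author paper counts are built from the submission records in the same way.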
Using the Python PuLP library and setting submission limits $x=[4,6,8,10,12,14]$, we obtained the following results:

**Table R.1: ICLR '21 Results**

| Method / Limit $x$ | 4 | 6 | 8 | 10 | 12 | 14 |
|--------------------|-------|-------|-------|-------|-------|-------|
| Cur. Desk. Rej.    | 0.112 | 0.059 | 0.033 | 0.021 | 0.013 | 0.009 |
| Fair Desk. Rej.    | 0.074 | 0.035 | 0.018 | 0.011 | 0.006 | 0.004 |

**Table R.2: ICLR '22 Results**

| Method / Limit $x$ | 4 | 6 | 8 | 10 | 12 | 14 |
|--------------------|-------|-------|-------|-------|-------|-------|
| Cur. Desk. Rej.    | 0.112 | 0.059 | 0.035 | 0.023 | 0.013 | 0.007 |
| Fair Desk. Rej.    | 0.073 | 0.036 | 0.019 | 0.010 | 0.005 | 0.002 |

**Table R.3: ICLR '23 Results**

| Method / Limit $x$ | 4 | 6 | 8 | 10 | 12 | 14 |
|--------------------|-------|-------|-------|-------|-------|-------|
| Cur. Desk. Rej.    | 0.115 | 0.056 | 0.031 | 0.022 | 0.015 | 0.009 |
| Fair Desk. Rej.    | 0.074 | 0.033 | 0.018 | 0.011 | 0.007 | 0.004 |

These tables report the group fairness metric (Definition 5.5), i.e., the average cost function (Definition 5.2) across all authors. For example, a value of 0.02 indicates that, on average, an author has 2% of their papers rejected. Since a lower value indicates a stronger fairness guarantee, we can conclude that our method consistently achieves a significant cost reduction compared with the conventional approach.

### **Weakness 3: NP-Hardness of Individual Fairness**

We agree that it may not be practical to directly solve the mixed-integer program in Definitions 5.7 and 5.9. First, we would like to clarify that proving the hardness of optimizing individual fairness is itself a key technical contribution of our work.
By establishing that, under submission-limit-based desk rejection, the individual fairness objective cannot in general be optimized exactly, we highlight critical fairness concerns inherent in the desk-rejection mechanism. In real-world scenarios, however, we can relax the discrete constraints on $r$ and employ a softmax transformation to make the infinity norm continuous. Concretely, we consider the alternative objective:

$$\min_{r \in [0,1]^m} \mathsf{Softmax}(1_n - D^{-1}Wr) \quad \text{s.t.} \quad (Wr)/x \leq 1_n.$$

This formulation allows for smooth optimization with standard solvers. We acknowledge that the minimizer of this relaxed objective is not guaranteed to be the minimizer of the original problem (Definitions 5.7 and 5.9), and we leave an empirical evaluation of this approximation to future work. Another feasible approach is to use existing mixed-integer programming solvers. On instances that are not worst-case, e.g., in moderate-sized conferences, this alternative may also yield acceptable solutions in practice.

### **Other Comment 1: Name of Fairness Metrics**

We thank the reviewer for the suggestion on the names of the fairness metrics. We will adopt your advice and improve the naming in the next version of our paper.

---

Rebuttal Comment 1.1:

Comment: Thank you for addressing my concerns. The supplementary experiments are helpful and, in my view, should be included in the revised version of the paper, as empirical validation is important. Also, could you please add the discussion of NP-hardness to the revised version?

---

Reply to Comment 1.1.1:

Comment: We are glad to hear that our response addressed your concerns! Thank you for your positive feedback on the supplementary experiments. We agree that empirical validation is important and will include these results in the revised version of the paper. We will also add a discussion of the NP-hardness, as suggested. We sincerely appreciate your thoughtful comments and valuable suggestions.
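As a concrete companion to the relaxation discussed under Weakness 3 above, the following minimal sketch prototypes the smooth surrogate on a tiny hypothetical instance. We stress the assumptions: a log-sum-exp smooth maximum stands in as one concrete reading of the softmax transformation of the infinity norm, SciPy's SLSQP is used as the solver, and the matrix $W$, limit $x$, and temperature `tau` are all illustrative choices, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Tiny hypothetical instance (illustrative only): W is the author-paper
# incidence matrix, d_i the number of papers of author i, x the limit.
W = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 0, 0]], dtype=float)
x = 2.0
d = W.sum(axis=1)  # papers per author (diagonal of D)

def smooth_worst_cost(r, tau=0.05):
    """Log-sum-exp surrogate for max_i (1 - (W r)_i / d_i)."""
    costs = 1.0 - (W @ r) / d
    return tau * logsumexp(costs / tau)

res = minimize(
    smooth_worst_cost,
    x0=np.full(W.shape[1], 0.5),             # feasible interior start
    bounds=[(0.0, 1.0)] * W.shape[1],        # relaxed r in [0, 1]^m
    constraints=[{"type": "ineq",            # elementwise (W r)/x <= 1
                  "fun": lambda r: 1.0 - (W @ r) / x}],
    method="SLSQP",
)
```

Since the costs are linear in $r$ and log-sum-exp is convex, the surrogate problem is convex and smooth, so a local solver like SLSQP suffices; as `tau` shrinks, the surrogate approaches the true worst-case cost from above.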